Dataset columns:
- venue: string (2 classes)
- paper_content: string (7.54k–83.7k chars)
- prompt: string (161–2.5k chars)
- format: string (5 classes)
- review: string (293–9.84k chars)
NIPS
Title Learnable Polyphase Sampling for Shift Invariant and Equivariant Convolutional Networks Abstract We propose learnable polyphase sampling (LPS), a pair of learnable down/upsampling layers that enable truly shift-invariant and equivariant convolutional networks. LPS can be trained end-to-end from data and generalizes existing handcrafted downsampling layers. It is widely applicable as it can be integrated into any convolutional network by replacing down/upsampling layers. We evaluate LPS on image classification and semantic segmentation. Experiments show that LPS is on-par with or outperforms existing methods in both performance and shift consistency. For the first time, we achieve true shift-equivariance on semantic segmentation (PASCAL VOC), i.e., 100% shift consistency, outperforming baselines by an absolute 3.3%. Our project page and code are available at https://raymondyeh07.github.io/learnable_polyphase_sampling/ 1 Introduction For tasks like image classification, shifts of an object do not change the corresponding object label, i.e., the task is shift-invariant. This shift-invariance property has been incorporated into deep-nets yielding convolutional neural nets (CNN). Seminal works on CNNs [15, 24] directly attribute the model design to shift-invariance. For example, Fukushima [15] states “the network has an ability of position-invariant pattern recognition” and LeCun et al. [24] motivate CNNs by stating that they “ensure some degree of shift invariance.” CNNs have evolved since their conception. Modern deep-nets contain more layers, use different non-linearities and pooling layers. Re-examining these modern architectures, Zhang [56] surprisingly finds that modern deep-nets are not shift-invariant. To address this, Zhang [56] and Zou et al. [57] propose to perform anti-aliasing before each downsampling layer, and found it to improve the degree of invariance. More recently, Chaman and Dokmanic [5] show that deep-nets can be “truly shift-invariant,” i.e., a model’s output is identical for given shifted inputs. For this, they replace all downsampling layers with their adaptive polyphase sampling (APS) layer. While APS achieves true shift-invariance by selecting the max-norm polyphase component (a handcrafted downsampling scheme), an important question arises: are there more effective downsampling schemes that can achieve true shift-invariance? Consider an extreme case, a handcrafted deep-net that always outputs zeros is truly shift-invariant, but does not accomplish any task. This motivates to study how truly shift-invariant downsampling schemes can be learned from data. For this we propose Learnable Polyphase Sampling (LPS), a pair of down/upsampling layers that yield truly shift-invariant/equivariant deep-nets and can be trained in an end-to-end manner. For ∗Equal contribution. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). downsampling, LPS can be easily integrated into existing deep-net architectures by swapping out the pooling/striding layers. Theoretically, LPS generalizes APS to downsampling schemes that cannot be represented by APS. Hence, LPS’s ideal performance is never worse than that of APS. For upsampling, LPS guarantees architectures that are truly shift-equivariant, i.e., the output shifts accordingly when the input shifts. This is desirable for tasks like semantic image segmentation. 
To validate the proposed LPS, we conduct extensive experiments: (a) image classification on CIFAR10 [21] and ImageNet [12]; (b) semantic segmentation on PASCAL VOC [13]. We observe that the proposed approach outperforms APS and further improves anti-aliasing methods on both model performance and shift consistency. Our contributions are as follows: • We propose learnable polyphase sampling (LPS), a pair of novel down/upsampling layers, and prove that they yield truly shift-invariant/-equivariant deep-nets. Different from prior works, our sampling scheme is trained from end-to-end and not handcrafted. • We theoretically prove that LPS (downsampling) is a generalization of APS. Hence, in theory, LPS improves upon APS. • We conduct extensive experiments demonstrating the effectiveness of LPS on image classification and segmentation over three datasets comparing to APS and anti-aliasing approaches. 2 Related Work In this section, we briefly discuss related work, including shift invariant/equivariant deep-nets and pooling layers. Additional necessary concepts are reviewed in Sec. 3. Shift Invariant/equivariant convolutional networks. Modern convolutional networks use striding or pooling to reduce the amount of memory and computation in the model [17, 22, 40, 45]. As pointed out by Azulay and Weiss [1] and Zhang [56], these pooling/striding layers break the shiftinvariance property of deep-nets. To address this issue, Zhang [56] proposed to perform anti-aliasing, i.e., lowpass filtering (LPF) before each downsampling, a canonical signal processing technique for multi-rate systems [47]. We illustrate this approach in Fig. 1 (left). Zou et al. [57] further improved the LPF technique by using adaptive filters which better preserve edge information. While anti-aliasing filters are effective, Chaman and Dokmanic [5] show that true shift-invariance, i.e., 100% shift consistency, can be achieved without anti-aliasing. Specifically, they propose Adaptive Polyphase Sampling (APS) which selects the downsampling indices, i.e., polyphase components, based on the `p-norm of the polyphase components; a handcrafted rule, as illustrated in Fig. 1 (right). In a follow up technical report [4], APS is extended to upsampling using unpooling layers [2, 55], where the downsampling indices are saved to place values back to their corresponding spatial location during upsampling. Our work presents a novel pair of shift-invariant/equivariant down/upsampling layers which are trainable, in contrast to APS’s handcrafted selection rule. We note that generalizations of equivariance beyond shifts have also been studied [3, 7, 37, 39, 43, 46, 48, 52] and applied to various domains, e.g., sets [16, 31, 35, 38, 50, 53], graphs [10, 11, 19, 27, 28, 30, 32, 44, 51], spherical images [8, 9, 20], volumetric data [49], etc. In this work, we focus solely on shift-equivariance for images with CNNs. Pooling layers. Many designs for better downsampling or pooling layers have been proposed. Popular choices are Average-Pooling [23] and Max-Pooling [36]. Other generalizations also exists, e.g., LP - Pooling [42] which generalizes pooling to use different norms. The effectiveness of different pooling layers has also been studied by Scherer et al. [41]. More similar to our work is Stochastic-Pooling [54] and Mixed Max-Average Pooling [25]. Stochastic-Pooling constructs a probability distribution by normalizing activations within a window and sampling during training. In our work, we present a novel design which learns the sampling distribution. 
Mixed Max-Average Pooling learns a single scalar to permit a soft-choice between Max- and Average-Pooling. In contrast, our LPS has shift-equivariance guarantees while being end-to-end trainable. 3 Preliminaries We provide a brief review on equivariant and invariant functions to establish the notation. For readability, we use one-dimensional data to illustrate these ideas. In practice, these concepts are generalized to multiple channels and two-dimensional data. Shift invariance and equivariance. The concept of equivariance, a generalization of invariance, describes how a function’s output is transformed given that the input is transformed in a predefined way. For example, shift equivariance describes how the output is shifted given that the input is also shifted: think of image segmentation, if an object in the image is shifted then its corresponding mask is also shifted. A function f : RN 7→ RM is TN , {TM , I}-equivariant (shift-equivariant) if and only if (iff) ∃ T ∈ {TM , I} s.t. f(TNx) = Tf(x) ∀x ∈ RN , (1) where TNx[n] , x[(n+1) mod N ] ∀n ∈ Z denotes a circular shift, [·] denotes the indexing operator, and I denotes the identity function. This definition of equivariance handles the ambiguity that arises when shifting by one and downsampling by two. Ideally, a shift by one at the input should result in a 0.5 shift in the downsampled signal, which is not achievable on the integer grid. Hence, this definition considers either a shift by one or a no shift at the output as equivariant. Following the equivariance definition, invariance can be viewed as a special case where the transformation at the output is an identity function, I . Concretely, a function f : RN 7→ RM is TN , {I}equivariant (shift-invariant) iff f(TNx) = f(x) ∀x ∈ RN . (2) To obtain shift-invariance from shift-equivariant functions it is common to use global pooling. Observe that ∑ m f(Tx)[m] = ∑ m (T f(x))[m] (3) is shift-invariant if f is shift-equivariant, as summation is an orderless operation. Note that the composition of shift-equivariant functions maintains shift-equivariance. Hence, f can be a stack of equivariant layers, e.g., a composition of convolution layers. While existing deep-nets [17, 26, 40] do use global spatial pooling, these architectures are not shiftinvariant. This is due to pooling and downsampling layers, which are not shift-equivariant as we review next. Downsampling and pooling layers. A downsampling-by-two layer D : RN 7→ RbN/2c is defined as D(x)[n] = x[2n] ∀n ∈ Z, (4) which returns the even indices of the input x. As a shift operator makes the odd indices even, a downsampling layer is not shift-equivariant/invariant. Commonly used average or max pooling can be viewed as an average or max filter followed by downsampling, hence pooling is also not shift-equivariant/invariant. To address this issue, Chaman and Dokmanic [5] propose adaptive polyphase sampling (APS) which is an input dependent (adaptive) selection of the odd/even indices. Adaptive polyphase sampling. Proposed by Chaman and Dokmanic [5], adaptive polyphase sampling (APS) returns whether the odd or even indices, i.e., the polyphase components, based on their norms. Formally, APS : RN 7→ RbN/2c is defined as: APS(x) = { Poly(x)0 if ‖Poly(x)0‖ > ‖Poly(x)1‖ Poly(x)1 otherwise , (5) where x ∈ RN is the input and Poly(x)i denotes the polyphase components, i.e., Poly(x)0[n] = x[2n] and Poly(x)1[n] = x[2n+ 1]. 
(6) While this handcrafted selection rule achieves a consistent selection of the polyphase components, it is not the only way to achieve it, e.g., returning the polyphase component with the smaller norm. In this work, we study a family of shift-equivariant sampling layers and propose how to learn them in a data-driven manner. 4 Approach Our goal is to design a learnable down/upsampling layer that is shift-invariant/equivariant. We formulate down/upsampling by modeling the conditional probability of selecting each polyphase component given an input. For this we use a small neural network. This enables the sampling scheme to be trained end-to-end from data, hence the name learnable polyphase sampling (LPS). In Sec. 4.1, we introduce learnable polyphase downsampling (LPD), discuss how to train it end-to-end, and show that it generalizes APS. In Sec. 4.2, we propose a practical layer design of LPD. Lastly, in Sec. 4.3, we discuss how to perform LPS for upsampling, namely, learnable polyphase upsampling (LPU). For readability, we present the approach using one dimensional data, i.e., a row in an image. 4.1 Learnable Polyphase Downsampling We propose learnable polyphase downsampling (LPD) to learn a shift-equivariant downsampling layer. Given an input feature map x ∈ RC×N , LPD spatially downsamples the input to produce an output in RC×bN/2c via LPD(x)[c, n] = x[c, 2n+ k?] , Poly(x)k? , (7) where k? = argmaxk∈{0,1} pθ(k = k|x) and Poly(x)k? denotes the k?-th polyphase component. We model a conditional probability pθ(k|x) for selecting polyphase components, i.e., k denotes the random variable of the polyphase indices. For 1D data, there are only two polyphase components. Critically, not all pθ lead to an equivariant downsampling layer. For example, pθ(k = 0|x) = 1 results in the standard down-sampling which always returns values on even indices for 1D signals. We will next examine which family of pθ achieves a shift-equivariant downsampling layer. Shift-permutation equivariance of pθ. Consider the example in Fig. 2. We can see that a circular shift in the spatial domain induces a permutation in the polyphase components. Observe that the top-row of the polyphase component containing the blue circle and orange square are permuted to the second row when the input is circularly shifted. We now state this formally. Lemma 1. Polyphase shift-permutation property Poly(TNx)k = { Poly(x)1 if k = 0 TMPoly(x)0 if k = 1 . (8) Proof. By definition, Poly(TNx)k[n] = TNx[(2n+ k) mod N ] = x[(2n+ k + 1) mod N ] (9) = { x[(2n+ 1) mod N ] = Poly(x)1 if k = 0 x[(2(n+ 1)) mod N ] = TMPoly(x)0 if k = 1 (10) From Lemma 1, we observe that to achieve an equivariant downsampling layer a spatially shifted input should lead to a permutation of the selection probability (Claim 1). We note that pθ is said to be shift-permutation-equivariant if pθ(k = π(k)|TNx) = pθ(k = k|x), (11) where π denotes a permutation on the polyphase indices, i.e., a “swap” of indices is characterized by π(k), i.e., π(0) = 1 and π(1) = 0. Claim 1. If pθ is shift-permutation-equivariant, defined in Eq. (11), then LPD defined in Eq. (7) is a shift-equivariant downsampling layer. Proof. Let x̂ , TNx be a shifted version of x ∈ RN . Recall LPD(x) and LPD(x̂) are defined as: LPD(x) , Poly(x)k? , k? = arg max k∈{0,1} pθ(k = k|x), (12) LPD(x̂) , Poly(x̂)k̂? , k̂ ? = arg max k∈{0,1} pθ(k = k|x̂). (13) From Lemma 1, LPD(TNx) can be expressed as: LPD(TNx) = { Poly(x)1 if k̂? = 0 TMPoly(x)0 if k̂? = 1 . (14) As pθ is the shift-permutation-equivariant, k̂? 
= π(k?) = 1− k?. (15) Finally, combining Eq. (14) and Eq. (15), LPD(TNx) = { Poly(x)1 if k? = 1 TMPoly(x)0 if k? = 0 = ( (1− k?)TM + k?I ) · LPD(x), (16) showing that LPD satisfies the shift-equivariance definition reviewed in Eq. (1). Here, we parameterize pθ with a small neural network. The exact construction of a shift-permutation equivariant deep-net architecture is deferred to Sec. 4.2. We next discuss how to train the distribution parameters θ in LPD. End-to-end training of LPD. At training time, to incorporate stochasticity and compute gradients, we parameterize pθ using Gumbel Softmax [18, 29]. To backpropagate gradients to θ, we relax the selection of polyphase components as a convex combination, i.e., y = ∑ k zk · Poly(x)k, z ∼ pθ(k|x), (17) where z corresponds to a selection variable, i.e., ∑ k zk = 1 and zk ∈ [0, 1]. Note the slight abuse of notation as pθ(k|x) denotes a probability over polyphase indices represented in a one-hot format. We further encourage the Gumbel Softmax to behave more like an argmax by decaying its temperature τ during training as recommended by Jang et al. [18]. LPD generalizes APS. A key advantage of LPS over APS is that it can learn from data, potentially leading to a better sampling scheme than a handcrafted one. Here, we show that APS is a special case of LPD. Therefore, LPD should perform at least as well as APS if parameters are trained well. Claim 2. APS is a special case of LPD, i.e., LPD can represent APS’s selection rule. Proof. Consider a parametrization of pθ as follows, pθ(k = k|x) = exp (‖Poly(x)k‖)∑ j exp(‖Poly(x)j‖) . (18) As the exponential is a strictly increasing function we have argmax k pθ(k = k|x) = argmax k ‖Poly(x)k‖ . (19) Eq. (18) is a softmax with input ‖Poly(x)k‖, as such a function exists, LPD generalizes APS. 4.2 Practical LPD Design We aim for a conditional distribution pθ that is shift-permutation equivariant to obtain a shiftequivariant pooling layer. Let the conditional probability be modeled as: pθ(k = k|x) , exp[fθ(Poly(x)k)]∑ j exp[fθ(Poly(x)j)] , (20) where fθ : RC×H ′×W ′ 7→ R is a small network that extracts features from polyphase component Poly(x)k. We first show that pθ is shift-permutation equivariant if fθ is shift invariant. Claim 3. In Eq. (20), if fθ is shift invariant then pθ is shift-permutation equivariant (Eq. (11)). Proof. Denote a feature map x and its shifted version x̂ , TNx. By definition, pθ(k = π(k)|TNx) = exp(fθ(Poly(TNx)π(k)))∑ j exp(fθ(Poly(TNx)j)) . (21) With a shift-invariant fθ and using Lemma 1, fθ(Poly(TNx)π(k)) = fθ(TMPoly(x)k) = fθ(Poly(x)k) (22) ∴ pθ(k = π(k)|TNx) = exp(Poly(x)k)∑ j exp(k = Poly(x)j) = pθ(k = k|x) Based on the result in Claim 3, we now present a convolution based meta-architecture that satisfies the shift-permutation property. The general design principle: share parameters across polyphase indices, just as convolution achieves shift equivariance by sharing parameters, plus averaging over the spatial domain. An illustration of the proposed meta-architecture is shown in Fig. 3. Fully convolutional model. Logits are extracted from the polyphase components via fullyconvolutional operations followed by averaging along the channel and the spatial domain. Following this, f convθ is denoted as: f convθ (Poly(x)k) , 1 CM ∑ c,n f̃ convθ (Poly(x)k)[c, n], (23) where f̃ convθ : RC×M 7→ RC×M is a CNN model (without pooling layers) and M = bN/2c. The shift equivariance property of f̃ convθ guarantees that f conv θ is shift-invariant due to the global pooling. 
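To make the construction above concrete, the following is a minimal PyTorch-style sketch of an LPD layer for 2D feature maps: polyphase decomposition, a small fully convolutional network f_theta shared across the four polyphase components with global average pooling (so its scalar score is shift-invariant, as in Claim 3), a Gumbel-Softmax relaxation during training, and hard argmax selection at test time. The module and variable names, the tiny two-layer choice for f_theta, and the circular padding are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a Learnable Polyphase Downsampling (LPD) layer (assumes even
# spatial dimensions and circular padding); not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def polyphase_components(x):
    """Split x of shape (B, C, H, W) into its 4 stride-2 polyphase components."""
    comps = [x[:, :, i::2, j::2] for i in range(2) for j in range(2)]
    return torch.stack(comps, dim=1)             # (B, 4, C, H/2, W/2)

class LPD(nn.Module):
    def __init__(self, channels, hidden=32):
        super().__init__()
        # f_theta: small fully convolutional net shared across polyphase components;
        # a shift-invariant scalar score is obtained via global average pooling.
        self.f_theta = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1, padding_mode="circular"),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1, padding_mode="circular"),
        )

    def forward(self, x, tau=1.0):
        comps = polyphase_components(x)          # (B, K=4, C, H/2, W/2)
        B, K = comps.shape[:2]
        # One logit per polyphase component, averaged over channels and space (Eq. 23).
        logits = self.f_theta(comps.flatten(0, 1)).mean(dim=(1, 2, 3)).view(B, K)
        if self.training:
            # Relaxed selection: convex combination weighted by Gumbel-Softmax samples (Eq. 17).
            z = F.gumbel_softmax(logits, tau=tau, hard=False)
            return (z.view(B, K, 1, 1, 1) * comps).sum(dim=1)
        # Hard argmax selection at inference, matching Eq. (7).
        k_star = logits.argmax(dim=1)
        return comps[torch.arange(B, device=x.device), k_star]
```

In practice the temperature tau would be decayed over training, as recommended for the Gumbel-Softmax estimator, so that the relaxed selection approaches the hard argmax used at test time.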
4.3 Learnable Polyphase Upsampling (LPU) Beyond shift invariant models, we extend the theory from downsampling to upsampling, which permits to design shift-equivariant models. The main idea is to place the features obtained after downsampling back to their original spatial location. Given a feature map y ∈ RC×bN/2c downsampled via LPD from x, the upsampling layer outputs u ∈ RC×N are defined as follows: Poly(u)k? = { y, k? = argmaxk∈{0,1} pθ(k = k|x) 0, otherwise. (24) We name this layer learnable polyphase upsampling (LPU), i.e., LPU(y, pθ) , u. We now show that LPU and LPD achieve shift-equivariance. Claim 4. If pθ is shift-permutation equivariant, as defined in Eq. (11), then LPU ◦ LPD is shift-equivariant. Proof. We prove this claim following definitions of LPU, LPD and Lemma 1. The complete proof is deferred to Appendix Sec. A1. End-to-end training of LPU. As in downsampling, we also incorporate stochasticity via GumbelSoftmax. To backpropagate gradients to pθ, we relax the hard selection into a convex combination, i.e., Poly(u)k = zk · y, z ∼ pθ(k|x). (25) Anti-aliasing for upsampling. While LPU provides a shift-equivariant upsampling scheme, it introduces zeros in the output which results in high-frequency components. This is known as aliasing in a multirate system [47]. To resolve this, following the classical solution, we apply a low-pass filter scaled by the upsampling factor after each LPU. 5 Experiments We conduct experiments on image classification following prior works. We report on the same architectures and training setup. We report both the circular shift setup in APS [5] and the standard shift setup in LPF [56]. We also evaluate on semantic segmentation, considering the circular shift, inspired by APS, and the standard shift setup following DDAC [57]. For circular shift settings, the theory exactly matches the experiment hence true equivariance is achieved. To our knowledge, this is the first truly shift equivariant model reported on PASCAL VOC. 5.1 Image Classification (Circular Shift) Experiment & implementation details. Following APS, all the evaluated pooling and anti-aliasing models use the ResNet-18 [17] architecture with circular padding on CIFAR10 [21] and ImageNet [12]. Anti-alias filters are applied after each downsampling layer following LPF [56] and DDAC [57]. We also replace downsampling layers with APS [5] and our proposed LPS layer. We provide more experimental details in Appendix Sec. A4. Evaluation metrics. We report classification accuracy to quantify the model performance on the original dataset without any shifts. To evaluate shift-invariance, following APS, we report circularconsistency (C-Cons.) which computes the average percentage of predicted labels that are equal under two different circular shifts, i.e., ŷ(Circ. Shifth1,w1(I)) = ŷ(Circ. Shifth2,w2(I)), (26) where ŷ(I) denotes the predicted label for an input image I and h1, w1, h2, w2 are uniformly sampled from 0 to 32. We report the average over five random seeds. CIFAR10 results. Tab. 1 shows the classification accuracy and circular consistency on CIFAR10. We report the mean and standard deviation over five runs with different random initialization of the ResNet-18 model. We observe that the proposed LPS improves classification accuracy over all baselines while achieving 100% circular consistency. In addition to attaining perfect shift consistency, we observe that combining anti-aliasing with LPS further improves performance. ImageNet results. 
We conduct experiments on ImageNet with circular shift using ResNet-18. In Tab. 2, we compare with APS’s best model using a box filter (Rectangle-2), as reported by Chaman and Dokmanic [5]. While both APS and LPS achieve 100% circular consistency, our proposed LPS improves on classification accuracy in all scenarios, highlighting its advantages. 5.2 Image Classification (Standard Shift) Experiment & implementation details. To directly compare with results from LPF and DDAC, we conduct experiments on ImageNet using the ResNet-50 and ResNet-101 architectures following their setting, i.e., training with standard shifts augmentation and using convolution layers with zero-padding. Evaluation metrics. Shift consistency (S-Cons.) computes the average percentage of ŷ(Shifth1,w1(I)) = ŷ(Shifth2,w2(I)), (27) where h1, w1, h2, w2 are uniformly sampled from the interval {0, . . . , 32}. To avoid padding at the boundary, following LPF [56], we perform a shift on an image then crop its center 224× 224 region. We note that, due to the change in content at the boundary, perfect shift consistency is not guaranteed. ImageNet results. In Tab. 3, we compare to the best anti-aliasing result as reported in LPF, DDAC and DDAC∗ which is trained from the authors’ released code using hyperparameters specified in the repository. Note, in standard shift setting LPS no longer achieves true shift-invariance due to padding at the boundaries. Despite this gap from the theory, LPS achieves improvements in both performance and shift-consistency over the baselines. When compared to LPF, both ResNet-50 and ResNet-101 architecture achieved improved classification accuracy and shift-consistency. When compared to DDAC, LPS achieves comparable accuracy with higher shift-consistency. 5.3 Trainable Parameters and Inference Time While LPD is a data-driven downsampling layer, we show that the additional trainable parameters introduced by it are marginal with respect to the classification architecture. Tab. 4 shows the number of trainable parameters required by the ResNet-101 models. For each method, we report the absolute number of trainable parameters, which includes both classifier and learnable pooling weights. We also include the relative number of trainable parameters, which only considers the learnable pooling weights and the percentage it represents with respect to the default ResNet-101 architecture weights. For comparison purposes, we also include the inference time required by each model to evaluate their computational overhead. Mean and standard deviation of the inference time is computed for each method on 100 batches of size 32. Following ImageNet default settings, the image dimensions corrrespond to 224× 224× 3. Results show our proposed LPD method introduces approximately 1% additional trainable parameters on the ResNet-101 architecture, and increases the inference time roughly by 14.89 ms over the LPF anti-aliasing method (the less computationally expensive of the evaluated techniques). On the other hand, most of the overhead comes from DDAC, which increases the number of trainable parameters by approximately 4% and the inference time by approximately 55.97 ms. Overall, our comparison shows that, by equipping a classifier with LPD layers, the computational overhead is almost trivial. Despite increasing the number of trainable parameters, we empirically show that our LPD approach outperforms classifiers with significantly more parameters. Please refer to Sec. 
A4.3 for additional experiments comparing the performance of our ResNet-101 + LPD model against the much larger ResNet-152 classifier. LPD learns sampling schemes different from APS. To further analyze LPD, we replace all the LPD layers with APS for a ResNet-101 model trained on ImageNet. We observe a critical drop in top-1 classification accuracy from 78.8% to 0.1%, indicating that LPD did not learn a downsampling scheme equivalent to APS. We also counted how many times (across all layers) LPD selects the max-norm. On the ImageNet validation set, LPD selects the max `2-norm polyphase component only 20.57% of the time. These show LPD learned a selection rule that differs from the handcrafted APS. Qualitative study on LPD. In Fig. 4 we show the selected activations, at the fourth layer, of a ResNet-50 model with LPD. Each column describes the first 8 channels of the four possible polyphase components k ∈ {0, . . . , 3}. The component selected by LPD, denoted as k?, is boxed in blue. For comparison purposes, we also boxed the component that maximizes the `2-norm in red. We observe that LPD is distinct from APS as they select a different set of polyphase components. However, we did not observe a specific pattern that can explain LPD’s selection rule. 5.4 Semantic Segmentation (Circular Shift) Experiment & implementation details. We evaluate LPS’s down/upsampling layers on semantic segmentation. As in DDAC [57], we evaluate using the PASCAL VOC [13] dataset. Following DDAC, we use DeepLabV3+ [6] as our baseline model. We use the ResNet-18 backbone pre-trained on the ImageNet (circular shift) reported in Sec. 5.1. We experiment with using only the LPD backbone and the full LPS, i.e., both LPD and LPU. We also evaluate the performance using APS, which corresponds to a hand-crafted downsampling scheme, in combination with the default bilinear interpolation strategy from DeepLabV3+. Note that, while our LPS approach consists of both shift equivariant down and upsampling schemes (LPD and LPU, respectively), APS only operates on the downsampling process. Thus, the latter does not guarantee a circularly shift equivariant segmentation. Evaluation metric. We report mean intersection over union (mIoU) to evaluate segmentation performance. To quantify circular-equivariance, we report mean Average Segmentation Circular Consistency (mASCC) which computes the average percentage of predicted (per-pixel) labels that remained the same under two different circular shifts. I.e., a shifted image is passed to a model to make a segmentation prediction. This prediction is then “unshifted” for comparison. We report five random shift pairs for each image. Results. We report the results for PASCAL VOC in Tab. 5. Overall, we observe that LPD only and LPS achieve comparable results to DDAC and APS in mIoU. Notably, LPS achieves 100% mASCC, matching the theory. This confirms that both the proposed LPD and LPU layers are necessary and are able to learn effective down/up sampling schemes for semantic segmentation. 5.5 Semantic Segmentation (Standard Shift) Experiment & implementation details. For the standard shift setting, we directly follow the experimental setup from DDAC. We use DeepLabV3+ with a ResNet-101 backbone pre-trained on ImageNet as reported in Sec. 5.2. Evaluation metric. To quantify the shift-equivariance, following DDAC, we report the mean Average Semantic Segmentation Consistency (mASSC) which is a linear-shift version of mASCC described in Sec. 5.4 except boundary pixels are ignored. Results. In Tab. 
6, we compare mIoU and mASSC of LPS to various baselines. We observe that LPS achieves improvements in mIoU and consistency when compared to DDAC∗. We note that DDAC [57] did not release their code for mASSC. For a fair comparison, we report the performance of their released checkpoint using our implementation of mASSC, indicated with DDAC∗. Despite the gap in theory and practice due to non-circular padding at the boundary, our experiments show LPS remains an effective approach to improve both shift consistency and model performance. 6 Conclusion We propose learnable polyphase sampling (LPS), a pair of shift-equivariant down/upsampling layers. LPS’s design theoretically guarantees circular shift-invariance and equivariance while being end-to-end trainable. Additionally, LPS retains superior consistency on standard shifts where theoretical assumptions are broken at image boundaries. Finally, LPS captures a richer family of shift-invariant/equivariant functions than APS. Through extensive experiments on image classification and semantic segmentation, we demonstrate that LPS is on-par with/exceeds APS, LPF and DDAC in terms of model performance and consistency. Acknowledgments: We thank Greg Shakhnarovich & PALS at TTI-Chicago for the thoughtful discussions and computation resources. This work is supported in part by NSF under Grants 1718221, 2008387, 2045586, 2106825, MRI 1725729, NIFA award 2020-67021-32799, and funding by PPG Industries, Inc. We thank NVIDIA for providing a GPU.
1. What is the focus and contribution of the paper regarding CNNs? 2. What are the strengths of the proposed approach, particularly in its implementation? 3. What are the weaknesses of the paper, especially in terms of experimentation? 4. Do you have any concerns regarding the generalization of the index-selection rule? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper tackles the issue of shift invariance/equivariance in CNNs. Prior work (APS) suggested, when downsampling, selecting the index whose value has the largest norm. This paper suggests a generalization which consists of learning the index-selection rule instead. Experiments on classification (invariance: CIFAR, ImageNet) and segmentation (equivariance: PASCAL VOC) show convincing results. Learning this selection is implemented via a carefully crafted small NN and the Gumbel-Softmax trick.

Strengths And Weaknesses
Strengths: The paper clearly motivates the problem. The paper gives just the right amount of background knowledge and prior work. The evaluations are to the point.

Weaknesses: The experiments show convincing results: shift invariance matching theory, and small but consistent improvements in "regular" accuracy measures. However, I wonder whether the small but consistent improvements may simply be due to introducing more learnable parameters. This is almost addressed in Appendix A7, but what I am missing is the result of either (a) ResNet + LPS with a slightly reduced ResNet size to match plain ResNet params/speed, or (b) a plain ResNet with a slightly increased size to match ResNet + LPS params/speed. Also, this should be mentioned in the main paper, with reference to the appendix. Actually, most of the appendix should be mentioned in the main paper.

Questions
See above.

Limitations
None.
NIPS
Title Robust Bi-Tempered Logistic Loss Based on Bregman Divergences Abstract We introduce a temperature into the exponential function and replace the softmax output layer of the neural networks by a high-temperature generalization. Similarly, the logarithm in the loss we use for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are non-convex already in the single layer case. When replacing the last layer of the neural networks by our bi-temperature generalization of the logistic loss, the training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large datasets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method that uses the Tsallis divergence. N/A We introduce a temperature into the exponential function and replace the softmax output layer of the neural networks by a high-temperature generalization. Similarly, the logarithm in the loss we use for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are non-convex already in the single layer case. When replacing the last layer of the neural networks by our bi-temperature generalization of the logistic loss, the training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large datasets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method that uses the Tsallis divergence. 1 Introduction The logistic loss, also known as the softmax loss, has been the standard choice in training deep neural networks for classification. The loss involves the application of the softmax function on the activations of the last layer to form the class probabilities followed by the relative entropy (aka the Kullback-Leibler (KL) divergence) between the true labels and the predicted probabilities. The logistic loss is known to be a convex function of the activations (and consequently, the weights) of the last layer. Although desirable from an optimization standpoint, convex losses have been shown to be prone to outliers [15] as the loss of each individual example unboundedly increases as a function of the activations. These outliers may correspond to extreme examples that lead to large gradients, or misclassified training examples that are located far away from the classification boundary. Requiring a convex loss function at the output layer thus seems somewhat arbitrary, in particular since convexity in the last layer’s activations does not guarantee convexity with respect to the parameters of the network outside the last layer. Another issue arises due to the exponentially decaying tail of the softmax function that assigns probabilities to the classes. In the presence of mislabeled training examples near the classification boundary, the short tail of the softmax probabilities enforces the classifier to stretch the decision boundary towards the noisy training examples. In contrast, heavy-tailed alternatives for the softmax probabilities have been shown to significantly improve the robustness of the loss to these examples [8]. The logistic loss is essentially the negative logarithm of the predicted class probabilities, which are computed as the normalized exponentials of the inputs. 
In this paper, we tackle both shortcomings of the logistic loss, pertaining to its convexity as well as its tail-lightness, by replacing the logarithm and the exponential functions with their corresponding “tempered” versions. We define the function 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. logt : R+ Ñ R with temperature parameter t ě 0 as in [16]: logt(x) := 1 1 – t (x1–t – 1) . (1) The logt function is monotonically increasing and concave. The standard (natural) logarithm is recovered at the limit t Ñ 1. Unlike the standard log, the logt function is bounded from below by –1/(1 – t) for 0 ď t < 1. This property will be used to define bounded loss functions that are significantly more robust to outliers. Similarly, our heavy-tailed alternative for the softmax function is based on the tempered exponential function. The function expt : R Ñ R+ with temperature t ě 0 is defined as the inverse1 of logt, that is, expt(x) := [1 + (1 – t) x] 1/(1–t) + , (2) where [ ¨ ]+ = max{ ¨ , 0}. The standard exp function is again recovered at the limit t Ñ 1. Compared to the exp function, a heavier tail (for negative values of x) is achieved for t > 1. We use this property to define heavy-tailed analogues of softmax probabilities at the output layer. The vanilla logistic loss can be viewed as a logarithmic (relative entropy) divergence that operates on a “matching” exponential (softmax) probability assignment [11, 12]. Its convexity then stems from classical convex duality, using the fact that the probability assignment function is the gradient of the dual function to the negative entropy on the simplex. When the logt1 and expt2 are substituted instead, this duality still holds whenever t1 = t2, albeit with a different Bregman divergence, and the induced loss remains convex2. However, for t1 < t2, the loss becomes non-convex in the output activations. In particular, 0 ď t1 < 1 leads to a bounded loss, while t2 > 1 provides tail-heaviness. Figure 1 illustrates the tempered logt and expt functions as well as examples of our proposed bi-tempered logistic loss function for a two-class problem expressed as a function of the activation of the first class. The true label is assumed to be class one. Tempered generalizations of the logistic regression have been introduced before [7, 8, 22, 2]. The most recent two-temperature method [2] is based on the Tsallis divergence and contains all the previous methods as special cases. However, the Tsallis based divergences do not result in proper loss functions. In contrast, we show that the Bregman based construction introduced in this paper is indeed proper, which is a requirement for many real-world applications. 1.1 Our replacement of the softmax output layer in neural networks Consider an arbitrary classification model with multiclass softmax output. We are given training examples of the form (x, y), where x is a fixed dimensional input vector and the target y is a probability vector over k classes. In practice, the targets are often one-hot encoded binary vectors in k dimensions. Each input x is fed to the model, resulting in a vector z of inputs to the final softmax layer. This layer typically has one trainable weight vector wi per class i and yields the predicted class probability ŷi = exp(âi) řk j=1 exp(âj) = exp ( âi – log k ÿ j=1 exp(âj) ) , for linear activation âi = wi ¨ z for class i. 1When 0 ď t < 1, the domain of expt needs to be restricted to –1/(1 – t) ď x for the inverse property to hold. 
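For reference, a small NumPy sketch of the tempered logarithm (Eq. 1) and tempered exponential (Eq. 2) follows; the function names are illustrative, and t = 1 falls back to the standard log/exp.

```python
# Tempered logarithm (Eq. 1) and tempered exponential (Eq. 2); t = 1 recovers log/exp.
import numpy as np

def log_t(x, t):
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))
```

For 0 <= t < 1, log_t is bounded below by -1/(1 - t) and exp_t inverts it on the restricted domain noted in the footnote above; for t > 1, exp_t decays polynomially rather than exponentially for negative arguments, which gives the heavier tail.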
2In a restricted domain when t1 = t2 < 1, as discussed later. We first replace the softmax function by a generalized heavy-tailed version that uses the expt2 function with t2 > 1, which we call the tempered softmax function: ŷi = expt2 ( âi – λt2 (â) ) , where λt2 (â) P R is s.t. k ÿ j=1 expt2 ( âj – λt2 (â) ) = 1 . This requires computing the normalization value λt2 (â) (for each example) via a binary search or an iterative procedure like the one given in Appendix A. The relative entropy between the true label y and prediction ŷ is replaced by the tempered version with temperature range 0 ď t1 < 1, k ÿ i=1 ( yi (logt1 yi – logt1 ŷi) – 1 2–t1 (y2–t1i – ŷ 2–t1 i ) ) if y one-hot = – logt1 ŷc – 1 2–t1 ( 1 – k ÿ i=1 ŷ2–t1i ) . where c = argmaxi yi is the index of the one-hot class. In later sections we prove various properties of this loss. When t1 = t2 = 1, then it reduces to the vanilla relative entropy loss with softmax. Also when 0 ď t1 < 1, then the loss is bounded, while t2 > 1 gives the tempered softmax function a heavier tail. 1.2 An illustration We provide some intuition on why both boundedness of the loss as well as tail-heaviness of the tempered softmax are crucial for robustness. For this, we train a small two-layer feed-forward neural network on a synthetic binary classification problem in two dimensions. The network has 10 and 5 units in the first and second layer, respectively3. Figure 2(a) shows the results of the logistic and our bi-tempered logistic loss on the noise-free dataset. The network converges to a desirable classification boundary (the white stripe in the figure) using both loss functions. In Figure 2(b), we illustrate the effect of adding small-margin label noise to the training examples, targeting those examples that reside near the noise-free classification boundary. The logistic loss clearly follows the noisy examples by stretching the classification boundary. On the other hand, using only the tail-heavy tempered softmax function (t2 = 4 while t1 = 1, i.e. KL divergence as the divergence) can handle the noisy examples by producing more uniform class probabilities. Next, we show the effect of large-margin noisy examples in Figure 2(c), targeting examples that are located far away from the noise-free classification boundary. The convexity of the logistic loss causes the network to be highly affected by the noisy examples that are located far away from the boundary. In contrast, only the boundedness of the loss (t1 = 0.2 while t2 = 1, meaning that the outputs are vanilla softmax probabilities) reduces the 3An interactive visualization of the bi-tempered loss is available at: https://google.github.io/ bi-tempered-loss/ effect of the outliers by allocating at most a finite amount of loss to each example. Finally, we show the effect of random label noise that includes both small-margin and large-margin noisy examples in Figure 2(d). Clearly, the logistic loss fails to handle the noise, while our bi-tempered logistic loss successfully recovers the appropriate boundary. Note for random noise, we exploit both boundedness of the loss (t1 = 0.2 < 1) as well as the tail-heaviness of the probability assignments (t2 = 4 > 1). The theoretical background as well as our treatment of the softmax layer of the neural networks are developed in later sections. In particular, we show that special discrete choices of the temperatures result in a large variety of divergences commonly used in machine learning. 
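Putting the two pieces together, the sketch below (building on log_t and exp_t above) computes the tempered softmax by bisecting for the normalizer lambda_{t2}(a) and then evaluates the one-hot form of the bi-tempered loss from Sec. 1.1. The bisection bounds and the use of binary search rather than the fixed-point iteration of Appendix A are illustrative choices, not the authors' exact procedure.

```python
# Tempered softmax via bisection for the normalizer, plus the bi-tempered loss
# for a one-hot target (index c). Uses log_t / exp_t from the sketch above.
def tempered_softmax(a, t2, n_iter=50):
    a = np.asarray(a, dtype=np.float64)
    # sum_i exp_t2(a_i - lam) is non-increasing in lam; it is >= 1 at lam = max(a)
    # and <= 1 at lam = max(a) - log_t2(1/k), so bisection converges to Eq. (7)'s root.
    lo = np.max(a)
    hi = np.max(a) - log_t(1.0 / a.size, t2)
    for _ in range(n_iter):
        lam = 0.5 * (lo + hi)
        if exp_t(a - lam, t2).sum() > 1.0:
            lo = lam
        else:
            hi = lam
    return exp_t(a - lam, t2)

def bi_tempered_loss(a, c, t1, t2):
    """Bi-tempered logistic loss for activations a and true class index c (Sec. 1.1)."""
    y_hat = tempered_softmax(a, t2)
    return (-log_t(y_hat[c], t1)
            - (1.0 - np.sum(y_hat ** (2.0 - t1))) / (2.0 - t1))
```

With t1 = t2 = 1 this reduces to ordinary softmax cross-entropy; setting 0 <= t1 < 1 bounds the per-example loss, and t2 > 1 gives the class probabilities a heavier tail, matching the noise-robustness behaviour illustrated above.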
As we show in our experiments, tuning the two temperatures as continuous parameters is crucial. 1.3 Summary of the experiments We perform experiments by adding synthetic label noise to MNIST and CIFAR-100 datasets and compare the results of our robust bi-tempered loss to the vanilla logistic loss. Our bi-tempered loss is significantly more robust to label noise (when trained on noisy data and test accuracy is measured w.r.t. the clean data): It provides 98.56% and 62.55% accuracy on MNIST and CIFAR-100, respectively, when trained with 40% label noise (compared to 97.64% and 53.17%, respectively, obtained using logistic loss). The bi-tempered loss also yields improvement over the state-of-the-art results on the ImageNet-2012 dataset using both the Resnet18 and Resnet50 architectures (see Table 2). 2 Preliminaries 2.1 Convex duality and Bregman divergences on the simplex We start by briefly reviewing some basic background in convex analysis. For a continuouslydifferentiable strictly convex function F : D Ñ R, with convex domain D Ď Rk, the Bregman divergence [3] between y, ŷ P D induced by F is defined as ∆F(y, ŷ) = F(y) – F(ŷ) – (y – ŷ) ¨ f (ŷ) , where f (ŷ) := ∇F(ŷ) denotes the gradient of F at ŷ (sometimes called the link function of F). Clearly ∆F(y, ŷ) ě 0 and ∆F(y, ŷ) = 0 iff y = ŷ. Also the Bregman divergence is always convex in the first argument and ∇y ∆F(y, ŷ) = f (y)– f (ŷ), but not generally in its second argument. Bregman divergence generalizes many well-known divergences such as the squared Euclidean ∆F(y, ŷ) = 1 2 }y – ŷ}22 (with F(y) = 1 2 }y}22) and the Kullback–Leibler divergence ∆F(y, ŷ) = ř i(yi log yi ŷi – yi + ŷi) (with F(y)= ř i(yi log yi – yi)). The Bregman divergence is typically not symmetric, i.e. ∆F(y, ŷ)‰∆F(ŷ, y). Additionally, the Bregman divergence is invariant to adding affine functions to the convex function F: ∆F+A(y, ŷ) = ∆F(y, ŷ), where A(y) = b + c ¨ y for arbitrary b P R, c P R k. For every differentiable strictly convex function F (with domain D Ď Rk+), there exists a convex dual F˚ : D˚ Ñ R function such that for dual parameter pairs (y, a), a P D˚, the following holds: a = f (y) and y = f ˚(a) = ∇F˚(a) = f –1(a). However, we are mainly interested in the dual of the function F when the domain is restricted to the probability simplex Sk := {y P Rk+| řk i=1 yi = 1}. Let F̌˚ : Ď˚ Ñ R denote the convex conjugate of the restricted function F : D X Sk Ñ R, F̌˚(a) = sup y1PDXSk ( y 1 ¨ a – F(y1) ) = sup y1PD inf λPR ( y 1 ¨ a – F(y1) + λ (1 – k ÿ i=1 y1i) ) , where we introduced a Lagrange multiplier λ P R to enforce the linear constraint řk i=1 y 1 i = 1. At the optimum, the following relationships hold between the primal and dual variables: f (y) = a – λ(a) 1 and y = f –1 ( a – λ(a) 1 ) = f̌ ˚(a) , (3) where λ(a) is chosen so that it satisfies the constraint. Note the dependence of the optimum λ on a. 2.2 Matching losses Next, we recall the notion of a matching loss [11, 12, 4, 17]. It arises as a natural way of defining a loss function over activations â P Rk, by first mapping them to a probability distribution over class labels using a transfer function s : Rk Ñ Sk, and then computing a divergence ∆F between this distribution and the correct target labels. The idea behind the following definition is to “match” the transfer function and the divergence via duality.4 Definition 1 (Matching Loss). 
Let F : Sk Ñ R be a continuously-differentiable, strictly convex function and let s : Rk Ñ Sk be a transfer function such that ŷ = s(â) denotes the predicted probability distribution based on the activations â. Then the loss function LF(â | y) := ∆F(y, s(â)) , is called the matching loss for s, if s = f̌ ˚ = ∇F̌˚. Note that f̌ ˚ is no longer one-to-one since f̌ ˚(â + R 1) = f̌ ˚(â) (see Appendix D for more details). However, w.l.o.g. we can constrain the domain of the function to â P dom(f̌ ˚) X {a1 P Rk | a1 ¨ 1 = 0} to obtain a one-to-one mapping. The matching loss is useful due to the following property. Proposition 1. The matching loss LF(â | y) is convex w.r.t. the activations â P dom(f̌ ˚) X {a1 P Rk | a 1 ¨ 1 = 0}. Proof. Note that F̌˚ is a strictly convex function and the following relation holds between the divergences induced by F and F̌˚ (see proof of Proposition 4 in Appendix D): ∆F ( y, ŷ ) = ∆F̌˚ ( (f̌ ˚)–1(ŷ), (f̌ ˚)–1(y) ) . (4) Thus for any â in the range of (f̌ ˚)–1, ∆F ( y, f̌ ˚(â) ) = ∆F̌˚ ( â, (f̌ ˚)–1(y) ) . The claim now follows from the convexity of ∆F̌˚ w.r.t. its first argument. The original motivating example for the matching loss was the logistic loss [11, 12]. It can be obtained as the matching loss for the softmax function ŷi = [f̌ ˚(â)]i = exp(âi) řk j=1 exp(âj) , which corresponds to the relative entropy (KL) divergence LF(â | y) = ∆F ( y, f̌ ˚(â) ) = k ÿ i=1 yi (log yi – log ŷi) = k ÿ i=1 ( yi log yi – yi âi) ) + log ( k ÿ i=1 exp(âi) ) , induced from the negative entropy function F(y) = řk i=1(yi log yi – yi). We next define a family of convex functions Ft parameterized by a temperature t ě 0. The matching loss LFt (â | y) = ∆Ft ( y, f̌ ˚t (â) ) for the link function f̌ ˚t of F̌ ˚ t is convex in the activations â. However, by letting the temperature t2 of f̌ ˚ t2 be larger than the temperature t1 of Ft1 , we construct bounded non-convex losses with heavy-tailed transfer functions. 3 Tempered Matching Loss We start by introducing a generalization of the relative entropy divergence, denoted by ∆Ft , induced by a strictly convex function Ft : R k + Ñ R with a temperature parameter t ě 0. The convex function Ft is chosen so that its gradient takes the form 5 ft(y) := ∇Ft(y) = logt y. Via simple integration, we obtain that Ft(y) = k ÿ i=1 ( yi logt yi + 1 2–t (1 – y2–ti ) ) . Indeed, Ft is a convex function since ∇ 2Ft(y) = diag(y –t) ľ 0 for any y P Rk+. In fact, Ft is strongly convex, for 0 ď t ď 1: Lemma 1. The function Ft, with 0 ď t ď 1, is B –t–strongly convex over the set {y P Rk+ : }y}2–t ď B} w.r.t. the L2–t-norm. 4Originally in [11, 12], the matching loss was defined as a simple integral over the transfer function s = f –1: LF(â | y) = ş â s–1(y) (s(z) – y)¨d z. Our new duality based definition handles additional linear constraints. 5Here, the logt function is applied elementwise. See Appendix B for a proof. The Bregman divergence induced by Ft is then given by ∆Ft (y, ŷ) = k ÿ i=1 ( yi logt yi – yi logt ŷi – 1 2–t y2–ti + 1 2–t ŷ2–ti ) = k ÿ i=1 ( 1 (1–t)(2–t) y2–ti – 1 1–t yiŷ 1–t i + 1 2–t ŷ2–ti ) . (5) The second form may be recognized as β-divergence [5] with parameter β = 2 – t. The divergence (5) includes many well-known divergences such as squared Euclidean, KL, and Itakura-Saito divergence as special cases. A list of additional special cases is given in Table 3 of Appendix C. The following corollary is the direct consequence of the strong convexity of Ft. Corollary 1. Let max(}y}2–t, }ŷ}2–t) ď B for 0 ď t < 1. 
Then 1 2Bt }y – ŷ}22–t ď ∆Ft (y, ŷ) ď Bt 2 (1 – t)2 }y1–t – ŷ1–t}22–t 1–t . See Appendix B for a proof. Thus for 0 ď t < 1, ∆Ft (y, ŷ) is upper-bounded by 2 B2–t (1–t)2 . Note that boundedness on the simplex also implies boundedness in the L2–t-ball. Thus, Corollary 1 immediately implies the boundedness of the divergence ∆Ft (y, ŷ) with 0 ď t < 1 over the simplex. Alternate parameterizations of the family {Ft} of convex functions and their corresponding Bregman divergences are discussed in Appendix C. 3.1 Tempered softmax function Now, let us consider the convex function Ft(y) when its domain is restricted to the probability simplex Sk. We denote the constrained dual of Ft(y) by F̌ ˚ t (a), F̌˚t (a) = sup y1PSk ( y 1 ¨ a – Ft(y 1) ) = sup y1PRk+ inf λtPR ( y 1 ¨ a – Ft(y 1) + λt ( 1 – k ÿ i=1 y1i )) . (6) Following our discussion in Section 2.1 and using (3), the transfer function induced by F̌˚t is 6 y = expt ( a – λt(a) 1 ) , with λt(a) s.t. k ÿ i=1 expt ( ai – λt(a) ) = 1. (7) 3.2 Matching loss of tempered softmax Finally, we derive the matching loss function LFt . Plugging in (7) into (5), we have Lt(â | y) = ∆Ft ( y, expt(â – λt(â) 1) ) . Recall that by Proposition 1, this loss is convex in activations â P dom(f̌ ˚) X {a1 P Rk | a1 ¨ 1 = 0}. In general, λt(a) does not have a closed form solution. However, it can be easily approximated via an iterative method, e.g., a binary search. An alternative (fixed-point) algorithm for computing λt(a) for t > 1 is given in Algorithm 1 of Appendix A. 4 Robust Bi-Tempered Logistic Loss A more interesting class of loss functions can be obtained by introducing a “mismatch” between the temperature of the divergence function (5) and the temperature of the probability assignment function, i.e. the tempered softmax (7). That is, we consider loss functions of the following type: @ 0ď t1 < 1< t2 : L t2 t1 (â | y) := ∆Ft1 ( y, expt2 (â – λt2 (â)1) ) ,with λt(â) s.t. k ÿ i=1 expt ( ai – λt(a) ) =1. (8) We call this the Bi-Tempered Logistic Loss. As illustrated in our two-dimensional example in Section 1, both properties are crucial for handling noisy examples. The derivative of the bi-tempered loss is given in Appendix E. In the following, we discuss the properties of this loss for classification. 6Note that due to the simplex constraint, the link function y = f̌ ˚t (a) = ∇F̌ ˚ t (a) = expt ( a – λt(a) 1 ) is different from f –1t (a) = f ˚ t (a) = ∇F ˚ t (a) = expt(a), i.e., the gradient of the unconstrained dual. 4.1 Properness and Monte-Carlo sampling Let PUK(x, y) denote the (unknown) joint probability distribution of the observed variable x P R m and the class label y P [k]. The goal of discriminative learning is to approximate the posterior distribution of the labels PUK(y | x) via a parametric model P(y | x;Θ) parameterized by Θ. Thus the model fitting can be expressed as minimizing the following expected loss between the data and the model’s label probabilities EPUK(x) [ ∆ ( PUK(y | x), P(y | x;Θ) ) ] , (9) where ∆ ( PUK(y | x), P(y | x;Θ) ) is any divergence measure between PUK(y | x) and P(y | x;Θ). We use ∆ := ∆Ft1 as the divergence and P(i | x;Θ) := P(y = i | x;Θ) = expt2 (âi – λt2 (â)), where â is the activation vector of the last layer given input x and Θ is the set of all weights of the network. Ignoring the constant terms w.r.t. 
Θ, our loss (9) becomes EPUK(x) [ ÿ i ( – PUK(i | x) logt P(i | x;Θ) + 1 2 – t P(i | x;Θ)2–t ) ] (10a) = –EPUK(x,y) [ logt P(y | x;Θ) ] + EPUK(x) [ 1 2 – t ÿ i P(i | x;Θ)2–t ) ] (10b) « 1 N ÿ n ( – logt P(yn | xn;Θ) + 1 2 – t ÿ i P(i | xn;Θ) 2–t ) , (10c) where from (10b) to (10c), we perform a Monte-Carlo approximation of the expectation w.r.t. PUK(x, y) using samples {(xn, yn)} N n=1. Thus, (10c) is an unbiased approximate of the expected loss (9), thus is a proper loss [20]. Following the same approximation steps for the Tsallis divergence used in [2], we have EPUK(x) [ – ÿ i PUK(i | x) logt P(i | x;Θ) PUK(i | x) looooooooooooooooomooooooooooooooooon ∆Tsallist ( PUK(y|x), P(y|x;Θ) ) ] « – 1 N ÿ n logt P(yn | xn;Θ) PUK(yn | xn) , which, due to the fact that logt a b ‰ logt a – logt b in general, requires access to the (unknown) conditional distribution PUK(y | x). In this case the approximation – 1 N ř n logt P(yn | xn;Θ) proposed in [2] by setting PUK(yn | xn) to 1 is not an unbiased estimator of (9) and therefore, not proper. 4.2 Bayes-risk consistency Another important property of a multiclass loss is the Bayes-risk consistency [19]. Bayes-risk consistency of the two-temperature logistic loss based on the Tsallis divergence was shown in [2]. As expected, the tempered Bregman loss (8) is also Bayes-risk consistent even in the non-convex case. Proposition 2. The multiclass bi-tempered logistic loss Lt2t1 (â | y) is Bayes-risk consistent. 5 Experiments We demonstrate the practical utility of the bi-tempered logistic loss function on a wide variety of image classification tasks. For moderate-size experiments, we use MNIST dataset of handwritten digits [14] and CIFAR-100, which contains real-world images from 100 different classes [13]. We use ImageNet-2012 [6] for large scale image classification, having 1000 classes. All experiments are carried out using the TensorFlow [1] framework. We use P100 GPU’s for small-scale experiments and Cloud TPU-v2 for larger scale ImageNet-2012 experiments. An implementation of the bi-tempered logistic loss is available online at: https://github.com/google/bi-tempered-loss. 5.1 Corrupted labels experiments For our moderate size datasets, i.e. MNIST and CIFAR-100, we introduce noise by artificially corrupting a fraction of the labels and producing a new set of labels for each noise level. For all experiments, we compare our bi-tempered loss function against the logistic loss. For MNIST, we use a CNN with two convolutional layers of size 32 and 64 with a mask size of 5, followed by two fully-connected layers of size 1024 and 10. We apply max-pooling after each convolutional layer with a window size equal to 2 and use dropout during training with keep probability equal to 0.75. We use the AdaDelta optimizer [21] with 500 epochs and batch size of 128 for training. For CIFAR-100, we use a Resnet-56 [10] model without batch norm from [9] with SGD + momentum optimizer trained for 50k steps with batch size of 128 and use the standard learning rate stair case decay schedule. For both experiments, we report the test accuracy of the checkpoint which yields the highest accuracy on an identically label-noise corrupted validation set. We search over a set of learning rates for each experiment. For both experiments, we exhaustively search over a number of temperatures within the range [0.5, 1) and (1.0, 4.0] for t1 and t2, respectively. The results are presented in Table 1 where we report the top-1 accuracy on a clean test set. 
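As a small companion to the corrupted-label setup described above, the following sketch shows one way to inject symmetric label noise at a given rate; details such as whether the original class is excluded when resampling are assumptions rather than the paper's exact protocol.

```python
# Illustrative symmetric label corruption for the noisy-label experiments;
# details (e.g., excluding the true class) are assumptions, not the paper's protocol.
import numpy as np

def corrupt_labels(labels, noise_frac, num_classes, seed=0):
    rng = np.random.default_rng(seed)
    noisy = np.array(labels, copy=True)
    idx = rng.choice(len(noisy), size=int(noise_frac * len(noisy)), replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=idx.size)  # uniform over all classes
    return noisy
```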
As can be seen, the bi-tempered loss outperforms the logistic loss for all noise levels (including the noise-free case for CIFAR-100). Using our bi-tempered loss function the model is able to continue to perform well even for high levels of label noise whereas the accuracy of the logistic loss drops immediately with a much smaller level of noise. 5.2 Large scale experiments We train state-of-the-art Resnet-18 and Resnet-50 models on the ImageNet-2012 dataset. Note that the ImageNet-2012 dataset is inherently noisy due to some amount of mislabeling. We train on a 4x4 CloudTPU-v2 device with a batch size of 4096. All experiments were trained for 180 epochs, and use the SGD + momentum optimizer with staircase learning rate decay schedule. The results are presented in Table 2. For both architectures we see a significant gain in the top-1 accuracy using the robust bi-tempered loss. 6 Conclusion and Future Work Neural networks on large standard datasets have been optimized along with a large variety of variables such as architecture, transfer function, choice of optimizer, and label smoothing to name just a few. We proposed a new variant by training the network with tunable loss functions. We do this by first developing convex loss functions based on temperature dependent logarithm and exponential functions. When both temperatures are the same, then a construction based on the notion of “matching loss” leads to loss functions that are convex in the last layer. However by letting the temperature of the new tempered softmax function be larger than the temperature of the tempered log function used in the divergence, we construct tunable losses that are non-convex in the last layer. Our construction remedies two issues simultaneously: we construct bounded tempered loss functions that can handle large-margin outliers and introduce heavy-tailedness in our new tempered softmax function that seems to handle small-margin mislabeled examples. At this point, we simply took a number of benchmark datasets and networks for these datasets that have been heavily optimized for the logistic loss paired with vanilla softmax and simply replaced the loss in the last layer by our new construction. By simply trying a number of temperature pairs, we already achieved significant improvements. We believe that with a systematic “joint optimization” of all commonly tried variables, significant further improvements can be achieved. This is of course a more long-term goal. We also plan to explore the idea of annealing the temperature parameters over the training process. Acknowledgement We would like to thank Jerome Rony for pointing out that early stopping improves the accuracy of the logistic loss on the noisy MNIST experiment. This research was partially supported by the NSF grant IIS-1546459.
1. What are the novel aspects of the proposed loss functions in comparison to existing ones? 2. How do the theoretical analyses support the validation of the proposed loss functions? 3. What are the statistical properties of the proposed loss functions, and how do they compare to existing ones? 4. How robust are the proposed loss functions to noise and outliers, both theoretically and empirically? 5. How does the non-convexity of the proposed loss function affect the analysis of the performance of the learning algorithm, and what implications does it have for practical applications?
Review
Review The proposed loss functions seem novel, and theoretical analyses are well presented to support their validity. These Bi-Tempered Logistic loss functions are variants of existing ones. They are derived by introducing a temperature into the exponential function and also by replacing the softmax with a high-temperature generalization. Besides all these well-presented derivations, I’d like to see some statistical properties of these functions, and how they compare to the existing ones. The authors claim the proposed loss functions are robust to noise and outliers. The authors are encouraged to present more theoretical & empirical analysis on this part. This new loss function is not convex. Although the conventional logistic loss is not convex with respect to some parameters if a neural network is used, its convexity still enables researchers to theoretically analyze the performance of the learning algorithm. If this new non-convex function is used, is the analysis still possible?
NIPS
Title Robust Bi-Tempered Logistic Loss Based on Bregman Divergences

Abstract We introduce a temperature into the exponential function and replace the softmax output layer of the neural networks by a high-temperature generalization. Similarly, the logarithm in the loss we use for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are non-convex already in the single layer case. When replacing the last layer of the neural networks by our bi-temperature generalization of the logistic loss, the training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large datasets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method that uses the Tsallis divergence.

1 Introduction

The logistic loss, also known as the softmax loss, has been the standard choice in training deep neural networks for classification. The loss involves the application of the softmax function on the activations of the last layer to form the class probabilities followed by the relative entropy (aka the Kullback-Leibler (KL) divergence) between the true labels and the predicted probabilities. The logistic loss is known to be a convex function of the activations (and consequently, the weights) of the last layer. Although desirable from an optimization standpoint, convex losses have been shown to be prone to outliers [15] as the loss of each individual example unboundedly increases as a function of the activations. These outliers may correspond to extreme examples that lead to large gradients, or misclassified training examples that are located far away from the classification boundary. Requiring a convex loss function at the output layer thus seems somewhat arbitrary, in particular since convexity in the last layer’s activations does not guarantee convexity with respect to the parameters of the network outside the last layer. Another issue arises due to the exponentially decaying tail of the softmax function that assigns probabilities to the classes. In the presence of mislabeled training examples near the classification boundary, the short tail of the softmax probabilities enforces the classifier to stretch the decision boundary towards the noisy training examples. In contrast, heavy-tailed alternatives for the softmax probabilities have been shown to significantly improve the robustness of the loss to these examples [8]. The logistic loss is essentially the negative logarithm of the predicted class probabilities, which are computed as the normalized exponentials of the inputs.
In this paper, we tackle both shortcomings of the logistic loss, pertaining to its convexity as well as its tail-lightness, by replacing the logarithm and the exponential functions with their corresponding “tempered” versions. We define the function $\log_t : \mathbb{R}_+ \to \mathbb{R}$ with temperature parameter $t \geq 0$ as in [16]:

$\log_t(x) := \frac{1}{1-t}\,(x^{1-t} - 1)$.   (1)

The $\log_t$ function is monotonically increasing and concave. The standard (natural) logarithm is recovered in the limit $t \to 1$. Unlike the standard log, the $\log_t$ function is bounded from below by $-1/(1-t)$ for $0 \leq t < 1$. This property will be used to define bounded loss functions that are significantly more robust to outliers. Similarly, our heavy-tailed alternative for the softmax function is based on the tempered exponential function. The function $\exp_t : \mathbb{R} \to \mathbb{R}_+$ with temperature $t \geq 0$ is defined as the inverse (Footnote 1) of $\log_t$, that is,

$\exp_t(x) := [1 + (1-t)\,x]_+^{1/(1-t)}$,   (2)

where $[\,\cdot\,]_+ = \max\{\cdot, 0\}$. The standard exp function is again recovered in the limit $t \to 1$. Compared to the exp function, a heavier tail (for negative values of $x$) is achieved for $t > 1$. We use this property to define heavy-tailed analogues of softmax probabilities at the output layer.

The vanilla logistic loss can be viewed as a logarithmic (relative entropy) divergence that operates on a “matching” exponential (softmax) probability assignment [11, 12]. Its convexity then stems from classical convex duality, using the fact that the probability assignment function is the gradient of the dual function to the negative entropy on the simplex. When $\log_{t_1}$ and $\exp_{t_2}$ are substituted instead, this duality still holds whenever $t_1 = t_2$, albeit with a different Bregman divergence, and the induced loss remains convex (Footnote 2). However, for $t_1 < t_2$, the loss becomes non-convex in the output activations. In particular, $0 \leq t_1 < 1$ leads to a bounded loss, while $t_2 > 1$ provides tail-heaviness. Figure 1 illustrates the tempered $\log_t$ and $\exp_t$ functions as well as examples of our proposed bi-tempered logistic loss function for a two-class problem, expressed as a function of the activation of the first class. The true label is assumed to be class one.

Tempered generalizations of logistic regression have been introduced before [7, 8, 22, 2]. The most recent two-temperature method [2] is based on the Tsallis divergence and contains all the previous methods as special cases. However, the Tsallis-based divergences do not result in proper loss functions. In contrast, we show that the Bregman-based construction introduced in this paper is indeed proper, which is a requirement for many real-world applications.

1.1 Our replacement of the softmax output layer in neural networks

Consider an arbitrary classification model with multiclass softmax output. We are given training examples of the form $(x, y)$, where $x$ is a fixed-dimensional input vector and the target $y$ is a probability vector over $k$ classes. In practice, the targets are often one-hot encoded binary vectors in $k$ dimensions. Each input $x$ is fed to the model, resulting in a vector $z$ of inputs to the final softmax layer. This layer typically has one trainable weight vector $w_i$ per class $i$ and yields the predicted class probability

$\hat{y}_i = \frac{\exp(\hat{a}_i)}{\sum_{j=1}^k \exp(\hat{a}_j)} = \exp\Big( \hat{a}_i - \log \sum_{j=1}^k \exp(\hat{a}_j) \Big)$, for the linear activation $\hat{a}_i = w_i \cdot z$ for class $i$.

Footnote 1: When $0 \leq t < 1$, the domain of $\exp_t$ needs to be restricted to $-1/(1-t) \leq x$ for the inverse property to hold.
Footnote 2: In a restricted domain when $t_1 = t_2 < 1$, as discussed later.
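For concreteness, a minimal NumPy sketch of the tempered logarithm (1) and tempered exponential (2); the function names are illustrative and this is not the official implementation:

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm, Eq. (1); t = 1 recovers the natural log."""
    return np.log(x) if t == 1.0 else (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential, Eq. (2); t = 1 recovers exp, t > 1 has a heavier tail."""
    return np.exp(x) if t == 1.0 else np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

x = np.array([0.25, 1.0, 3.0])
print(exp_t(log_t(x, 0.5), 0.5))      # ~= x: exp_t inverts log_t on its domain
print(log_t(np.array([1e-9]), 0.5))   # bounded below by -1/(1-t) = -2 for t = 0.5
```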
We first replace the softmax function by a generalized heavy-tailed version that uses the $\exp_{t_2}$ function with $t_2 > 1$, which we call the tempered softmax function:

$\hat{y}_i = \exp_{t_2}\!\big( \hat{a}_i - \lambda_{t_2}(\hat{a}) \big)$, where $\lambda_{t_2}(\hat{a}) \in \mathbb{R}$ is such that $\sum_{j=1}^k \exp_{t_2}\!\big( \hat{a}_j - \lambda_{t_2}(\hat{a}) \big) = 1$.

This requires computing the normalization value $\lambda_{t_2}(\hat{a})$ (for each example) via a binary search or an iterative procedure like the one given in Appendix A. The relative entropy between the true label $y$ and the prediction $\hat{y}$ is replaced by the tempered version with temperature range $0 \leq t_1 < 1$,

$\sum_{i=1}^k \Big( y_i\,(\log_{t_1} y_i - \log_{t_1} \hat{y}_i) - \tfrac{1}{2-t_1}\,(y_i^{2-t_1} - \hat{y}_i^{2-t_1}) \Big) \;\overset{\text{if } y \text{ one-hot}}{=}\; -\log_{t_1} \hat{y}_c - \tfrac{1}{2-t_1}\Big( 1 - \sum_{i=1}^k \hat{y}_i^{2-t_1} \Big)$,

where $c = \arg\max_i y_i$ is the index of the one-hot class. In later sections we prove various properties of this loss. When $t_1 = t_2 = 1$, it reduces to the vanilla relative entropy loss with softmax. Also, when $0 \leq t_1 < 1$, the loss is bounded, while $t_2 > 1$ gives the tempered softmax function a heavier tail.

1.2 An illustration

We provide some intuition on why both boundedness of the loss as well as tail-heaviness of the tempered softmax are crucial for robustness. For this, we train a small two-layer feed-forward neural network on a synthetic binary classification problem in two dimensions. The network has 10 and 5 units in the first and second layer, respectively (an interactive visualization of the bi-tempered loss is available at: https://google.github.io/bi-tempered-loss/). Figure 2(a) shows the results of the logistic and our bi-tempered logistic loss on the noise-free dataset. The network converges to a desirable classification boundary (the white stripe in the figure) using both loss functions. In Figure 2(b), we illustrate the effect of adding small-margin label noise to the training examples, targeting those examples that reside near the noise-free classification boundary. The logistic loss clearly follows the noisy examples by stretching the classification boundary. On the other hand, using only the tail-heavy tempered softmax function ($t_2 = 4$ while $t_1 = 1$, i.e. the KL divergence as the divergence) can handle the noisy examples by producing more uniform class probabilities. Next, we show the effect of large-margin noisy examples in Figure 2(c), targeting examples that are located far away from the noise-free classification boundary. The convexity of the logistic loss causes the network to be highly affected by the noisy examples that are located far away from the boundary. In contrast, only the boundedness of the loss ($t_1 = 0.2$ while $t_2 = 1$, meaning that the outputs are vanilla softmax probabilities) reduces the effect of the outliers by allocating at most a finite amount of loss to each example. Finally, we show the effect of random label noise that includes both small-margin and large-margin noisy examples in Figure 2(d). Clearly, the logistic loss fails to handle the noise, while our bi-tempered logistic loss successfully recovers the appropriate boundary. Note that for random noise, we exploit both the boundedness of the loss ($t_1 = 0.2 < 1$) and the tail-heaviness of the probability assignments ($t_2 = 4 > 1$).

The theoretical background as well as our treatment of the softmax layer of the neural networks are developed in later sections. In particular, we show that special discrete choices of the temperatures result in a large variety of divergences commonly used in machine learning. As we show in our experiments, tuning the two temperatures as continuous parameters is crucial.
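As a sketch of the normalization just described, the following reuses the log_t and exp_t helpers from the previous snippet and finds $\lambda_{t_2}(\hat{a})$ by the binary search mentioned above (the bracketing bounds are our own assumption; the paper's Appendix A gives an alternative fixed-point iteration):

```python
def tempered_softmax(a, t, n_iter=60):
    """Tempered softmax: exp_t(a_i - lam) with lam chosen so the outputs sum to 1 (t > 1)."""
    a = np.asarray(a, dtype=float)
    lo = a.max()                            # at lam = max(a) the sum is >= 1
    hi = a.max() - log_t(1.0 / len(a), t)   # at this lam the sum is <= 1
    for _ in range(n_iter):                 # the sum decreases monotonically in lam
        lam = 0.5 * (lo + hi)
        lo, hi = (lam, hi) if exp_t(a - lam, t).sum() > 1.0 else (lo, lam)
    return exp_t(a - 0.5 * (lo + hi), t)

p = tempered_softmax([2.0, 1.0, -3.0], t=4.0)
print(p, p.sum())   # heavier-tailed than the vanilla softmax; sums to ~1
```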
1.3 Summary of the experiments

We perform experiments by adding synthetic label noise to the MNIST and CIFAR-100 datasets and compare the results of our robust bi-tempered loss to the vanilla logistic loss. Our bi-tempered loss is significantly more robust to label noise (when trained on noisy data and test accuracy is measured w.r.t. the clean data): it provides 98.56% and 62.55% accuracy on MNIST and CIFAR-100, respectively, when trained with 40% label noise (compared to 97.64% and 53.17%, respectively, obtained using the logistic loss). The bi-tempered loss also yields improvement over the state-of-the-art results on the ImageNet-2012 dataset using both the Resnet18 and Resnet50 architectures (see Table 2).

2 Preliminaries

2.1 Convex duality and Bregman divergences on the simplex

We start by briefly reviewing some basic background in convex analysis. For a continuously-differentiable strictly convex function $F : D \to \mathbb{R}$, with convex domain $D \subseteq \mathbb{R}^k$, the Bregman divergence [3] between $y, \hat{y} \in D$ induced by $F$ is defined as

$\Delta_F(y, \hat{y}) = F(y) - F(\hat{y}) - (y - \hat{y}) \cdot f(\hat{y})$,

where $f(\hat{y}) := \nabla F(\hat{y})$ denotes the gradient of $F$ at $\hat{y}$ (sometimes called the link function of $F$). Clearly $\Delta_F(y, \hat{y}) \geq 0$ and $\Delta_F(y, \hat{y}) = 0$ iff $y = \hat{y}$. Also, the Bregman divergence is always convex in the first argument and $\nabla_y\, \Delta_F(y, \hat{y}) = f(y) - f(\hat{y})$, but not generally in its second argument. The Bregman divergence generalizes many well-known divergences such as the squared Euclidean distance $\Delta_F(y, \hat{y}) = \frac{1}{2}\|y - \hat{y}\|_2^2$ (with $F(y) = \frac{1}{2}\|y\|_2^2$) and the Kullback–Leibler divergence $\Delta_F(y, \hat{y}) = \sum_i \big(y_i \log \frac{y_i}{\hat{y}_i} - y_i + \hat{y}_i\big)$ (with $F(y) = \sum_i (y_i \log y_i - y_i)$). The Bregman divergence is typically not symmetric, i.e. $\Delta_F(y, \hat{y}) \neq \Delta_F(\hat{y}, y)$. Additionally, the Bregman divergence is invariant to adding affine functions to the convex function $F$: $\Delta_{F+A}(y, \hat{y}) = \Delta_F(y, \hat{y})$, where $A(y) = b + c \cdot y$ for arbitrary $b \in \mathbb{R}$, $c \in \mathbb{R}^k$.

For every differentiable strictly convex function $F$ (with domain $D \subseteq \mathbb{R}^k_+$), there exists a convex dual function $F^* : D^* \to \mathbb{R}$ such that for dual parameter pairs $(y, a)$, $a \in D^*$, the following holds: $a = f(y)$ and $y = f^*(a) = \nabla F^*(a) = f^{-1}(a)$. However, we are mainly interested in the dual of the function $F$ when the domain is restricted to the probability simplex $S^k := \{y \in \mathbb{R}^k_+ \mid \sum_{i=1}^k y_i = 1\}$. Let $\check{F}^* : \check{D}^* \to \mathbb{R}$ denote the convex conjugate of the restricted function $F : D \cap S^k \to \mathbb{R}$,

$\check{F}^*(a) = \sup_{y' \in D \cap S^k} \big( y' \cdot a - F(y') \big) = \sup_{y' \in D}\, \inf_{\lambda \in \mathbb{R}} \Big( y' \cdot a - F(y') + \lambda\, \big(1 - \sum_{i=1}^k y'_i\big) \Big)$,

where we introduced a Lagrange multiplier $\lambda \in \mathbb{R}$ to enforce the linear constraint $\sum_{i=1}^k y'_i = 1$. At the optimum, the following relationships hold between the primal and dual variables:

$f(y) = a - \lambda(a)\,\mathbf{1}$ and $y = f^{-1}\big( a - \lambda(a)\,\mathbf{1} \big) = \check{f}^*(a)$,   (3)

where $\lambda(a)$ is chosen so that it satisfies the constraint. Note the dependence of the optimum $\lambda$ on $a$.
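A small sketch of the Bregman divergence definition above, instantiated for the two generators just mentioned (squared Euclidean and negative entropy); it assumes NumPy as np from the earlier snippets, and the helper names are illustrative:

```python
def bregman(F, grad_F, y, y_hat):
    """Delta_F(y, y_hat) = F(y) - F(y_hat) - (y - y_hat) . grad_F(y_hat)."""
    return F(y) - F(y_hat) - np.dot(y - y_hat, grad_F(y_hat))

F_sq  = lambda y: 0.5 * np.dot(y, y)          # squared Euclidean generator
F_ent = lambda y: np.sum(y * np.log(y) - y)   # negative entropy generator

y, y_hat = np.array([0.7, 0.2, 0.1]), np.array([0.5, 0.3, 0.2])
print(bregman(F_sq,  lambda v: v, y, y_hat), 0.5 * np.sum((y - y_hat) ** 2))
print(bregman(F_ent, np.log,      y, y_hat), np.sum(y * np.log(y / y_hat) - y + y_hat))
# Each pair of printed values agrees, matching the closed forms given in Section 2.1.
```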
2.2 Matching losses

Next, we recall the notion of a matching loss [11, 12, 4, 17]. It arises as a natural way of defining a loss function over activations $\hat{a} \in \mathbb{R}^k$, by first mapping them to a probability distribution over class labels using a transfer function $s : \mathbb{R}^k \to S^k$, and then computing a divergence $\Delta_F$ between this distribution and the correct target labels. The idea behind the following definition is to “match” the transfer function and the divergence via duality (Footnote 4).

Definition 1 (Matching Loss). Let $F : S^k \to \mathbb{R}$ be a continuously-differentiable, strictly convex function and let $s : \mathbb{R}^k \to S^k$ be a transfer function such that $\hat{y} = s(\hat{a})$ denotes the predicted probability distribution based on the activations $\hat{a}$. Then the loss function $L_F(\hat{a} \mid y) := \Delta_F(y, s(\hat{a}))$ is called the matching loss for $s$, if $s = \check{f}^* = \nabla \check{F}^*$.

Note that $\check{f}^*$ is no longer one-to-one since $\check{f}^*(\hat{a} + \mathbb{R}\,\mathbf{1}) = \check{f}^*(\hat{a})$ (see Appendix D for more details). However, w.l.o.g. we can constrain the domain of the function to $\hat{a} \in \mathrm{dom}(\check{f}^*) \cap \{a' \in \mathbb{R}^k \mid a' \cdot \mathbf{1} = 0\}$ to obtain a one-to-one mapping. The matching loss is useful due to the following property.

Proposition 1. The matching loss $L_F(\hat{a} \mid y)$ is convex w.r.t. the activations $\hat{a} \in \mathrm{dom}(\check{f}^*) \cap \{a' \in \mathbb{R}^k \mid a' \cdot \mathbf{1} = 0\}$.

Proof. Note that $\check{F}^*$ is a strictly convex function and the following relation holds between the divergences induced by $F$ and $\check{F}^*$ (see proof of Proposition 4 in Appendix D):

$\Delta_F\big( y, \hat{y} \big) = \Delta_{\check{F}^*}\big( (\check{f}^*)^{-1}(\hat{y}),\, (\check{f}^*)^{-1}(y) \big)$.   (4)

Thus for any $\hat{a}$ in the range of $(\check{f}^*)^{-1}$, $\Delta_F\big(y, \check{f}^*(\hat{a})\big) = \Delta_{\check{F}^*}\big( \hat{a},\, (\check{f}^*)^{-1}(y) \big)$. The claim now follows from the convexity of $\Delta_{\check{F}^*}$ w.r.t. its first argument.

The original motivating example for the matching loss was the logistic loss [11, 12]. It can be obtained as the matching loss for the softmax function $\hat{y}_i = [\check{f}^*(\hat{a})]_i = \frac{\exp(\hat{a}_i)}{\sum_{j=1}^k \exp(\hat{a}_j)}$, which corresponds to the relative entropy (KL) divergence

$L_F(\hat{a} \mid y) = \Delta_F\big(y, \check{f}^*(\hat{a})\big) = \sum_{i=1}^k y_i\,(\log y_i - \log \hat{y}_i) = \sum_{i=1}^k \big( y_i \log y_i - y_i\,\hat{a}_i \big) + \log\Big( \sum_{i=1}^k \exp(\hat{a}_i) \Big)$,

induced from the negative entropy function $F(y) = \sum_{i=1}^k (y_i \log y_i - y_i)$. We next define a family of convex functions $F_t$ parameterized by a temperature $t \geq 0$. The matching loss $L_{F_t}(\hat{a} \mid y) = \Delta_{F_t}\big( y, \check{f}^*_t(\hat{a}) \big)$ for the link function $\check{f}^*_t$ of $\check{F}^*_t$ is convex in the activations $\hat{a}$. However, by letting the temperature $t_2$ of $\check{f}^*_{t_2}$ be larger than the temperature $t_1$ of $F_{t_1}$, we construct bounded non-convex losses with heavy-tailed transfer functions.

Footnote 4: Originally in [11, 12], the matching loss was defined as a simple integral over the transfer function $s = f^{-1}$: $L_F(\hat{a} \mid y) = \int_{s^{-1}(y)}^{\hat{a}} (s(z) - y) \cdot dz$. Our new duality-based definition handles additional linear constraints.

3 Tempered Matching Loss

We start by introducing a generalization of the relative entropy divergence, denoted by $\Delta_{F_t}$, induced by a strictly convex function $F_t : \mathbb{R}^k_+ \to \mathbb{R}$ with a temperature parameter $t \geq 0$. The convex function $F_t$ is chosen so that its gradient takes the form (Footnote 5) $f_t(y) := \nabla F_t(y) = \log_t y$. Via simple integration, we obtain that

$F_t(y) = \sum_{i=1}^k \Big( y_i \log_t y_i + \tfrac{1}{2-t}\,(1 - y_i^{2-t}) \Big)$.

Indeed, $F_t$ is a convex function since $\nabla^2 F_t(y) = \mathrm{diag}(y^{-t}) \succeq 0$ for any $y \in \mathbb{R}^k_+$. In fact, $F_t$ is strongly convex for $0 \leq t \leq 1$:

Lemma 1. The function $F_t$, with $0 \leq t \leq 1$, is $B^{-t}$-strongly convex over the set $\{y \in \mathbb{R}^k_+ : \|y\|_{2-t} \leq B\}$ w.r.t. the $L_{2-t}$-norm.

See Appendix B for a proof.

Footnote 5: Here, the $\log_t$ function is applied elementwise.

The Bregman divergence induced by $F_t$ is then given by

$\Delta_{F_t}(y, \hat{y}) = \sum_{i=1}^k \Big( y_i \log_t y_i - y_i \log_t \hat{y}_i - \tfrac{1}{2-t}\, y_i^{2-t} + \tfrac{1}{2-t}\, \hat{y}_i^{2-t} \Big) = \sum_{i=1}^k \Big( \tfrac{1}{(1-t)(2-t)}\, y_i^{2-t} - \tfrac{1}{1-t}\, y_i\,\hat{y}_i^{1-t} + \tfrac{1}{2-t}\, \hat{y}_i^{2-t} \Big)$.   (5)

The second form may be recognized as the β-divergence [5] with parameter $\beta = 2 - t$. The divergence (5) includes many well-known divergences such as the squared Euclidean, KL, and Itakura-Saito divergences as special cases. A list of additional special cases is given in Table 3 of Appendix C.
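Continuing the earlier sketches (log_t and np in scope), a hedged implementation of the tempered divergence (5), with a check that $t \to 1$ recovers the KL divergence on the simplex:

```python
def tempered_bregman(y, y_hat, t):
    """Delta_{F_t}(y, y_hat) of Eq. (5); the beta-divergence with beta = 2 - t."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sum(y * (log_t(y, t) - log_t(y_hat, t))
                  - (y ** (2.0 - t) - y_hat ** (2.0 - t)) / (2.0 - t))

y, y_hat = np.array([0.7, 0.2, 0.1]), np.array([0.5, 0.3, 0.2])
print(tempered_bregman(y, y_hat, 1.0), np.sum(y * np.log(y / y_hat)))  # agree at t = 1
print(tempered_bregman(y, y_hat, 0.5))  # the bounded divergence used for the robust loss
```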
The following corollary is a direct consequence of the strong convexity of $F_t$.

Corollary 1. Let $\max(\|y\|_{2-t}, \|\hat{y}\|_{2-t}) \leq B$ for $0 \leq t < 1$. Then

$\frac{1}{2 B^t}\, \|y - \hat{y}\|_{2-t}^2 \;\leq\; \Delta_{F_t}(y, \hat{y}) \;\leq\; \frac{B^t}{2\,(1-t)^2}\, \big\|y^{1-t} - \hat{y}^{1-t}\big\|_{\frac{2-t}{1-t}}^2$.

See Appendix B for a proof. Thus for $0 \leq t < 1$, $\Delta_{F_t}(y, \hat{y})$ is upper-bounded by $\frac{2\,B^{2-t}}{(1-t)^2}$. Note that boundedness on the simplex also implies boundedness in the $L_{2-t}$-ball. Thus, Corollary 1 immediately implies the boundedness of the divergence $\Delta_{F_t}(y, \hat{y})$ with $0 \leq t < 1$ over the simplex. Alternate parameterizations of the family $\{F_t\}$ of convex functions and their corresponding Bregman divergences are discussed in Appendix C.

3.1 Tempered softmax function

Now, let us consider the convex function $F_t(y)$ when its domain is restricted to the probability simplex $S^k$. We denote the constrained dual of $F_t(y)$ by $\check{F}^*_t(a)$,

$\check{F}^*_t(a) = \sup_{y' \in S^k} \big( y' \cdot a - F_t(y') \big) = \sup_{y' \in \mathbb{R}^k_+}\, \inf_{\lambda_t \in \mathbb{R}} \Big( y' \cdot a - F_t(y') + \lambda_t\, \big(1 - \sum_{i=1}^k y'_i\big) \Big)$.   (6)

Following our discussion in Section 2.1 and using (3), the transfer function induced by $\check{F}^*_t$ is (Footnote 6)

$y = \exp_t\!\big( a - \lambda_t(a)\,\mathbf{1} \big)$, with $\lambda_t(a)$ s.t. $\sum_{i=1}^k \exp_t\!\big( a_i - \lambda_t(a) \big) = 1$.   (7)

3.2 Matching loss of tempered softmax

Finally, we derive the matching loss function $L_{F_t}$. Plugging (7) into (5), we have

$L_t(\hat{a} \mid y) = \Delta_{F_t}\big( y,\; \exp_t(\hat{a} - \lambda_t(\hat{a})\,\mathbf{1}) \big)$.

Recall that by Proposition 1, this loss is convex in the activations $\hat{a} \in \mathrm{dom}(\check{f}^*) \cap \{a' \in \mathbb{R}^k \mid a' \cdot \mathbf{1} = 0\}$. In general, $\lambda_t(a)$ does not have a closed-form solution. However, it can be easily approximated via an iterative method, e.g., a binary search. An alternative (fixed-point) algorithm for computing $\lambda_t(a)$ for $t > 1$ is given in Algorithm 1 of Appendix A.

4 Robust Bi-Tempered Logistic Loss

A more interesting class of loss functions can be obtained by introducing a “mismatch” between the temperature of the divergence function (5) and the temperature of the probability assignment function, i.e. the tempered softmax (7). That is, we consider loss functions of the following type:

$\forall\; 0 \leq t_1 < 1 < t_2:\quad L^{t_2}_{t_1}(\hat{a} \mid y) := \Delta_{F_{t_1}}\big( y,\; \exp_{t_2}(\hat{a} - \lambda_{t_2}(\hat{a})\,\mathbf{1}) \big)$, with $\lambda_t(\hat{a})$ s.t. $\sum_{i=1}^k \exp_t\!\big( \hat{a}_i - \lambda_t(\hat{a}) \big) = 1$.   (8)

We call this the Bi-Tempered Logistic Loss. As illustrated in our two-dimensional example in Section 1, both properties are crucial for handling noisy examples. The derivative of the bi-tempered loss is given in Appendix E. In the following, we discuss the properties of this loss for classification.

Footnote 6: Note that due to the simplex constraint, the link function $y = \check{f}^*_t(a) = \nabla \check{F}^*_t(a) = \exp_t\!\big( a - \lambda_t(a)\,\mathbf{1} \big)$ is different from $f_t^{-1}(a) = f^*_t(a) = \nabla F^*_t(a) = \exp_t(a)$, i.e., the gradient of the unconstrained dual.
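Putting the pieces together, a minimal sketch of the bi-tempered loss (8) composed from the tempered_softmax and tempered_bregman helpers sketched above (an illustrative composition, not the reference implementation); note how the loss stays finite for a large-margin mislabeled example when $t_1 < 1$:

```python
def bi_tempered_loss(activations, y, t1, t2):
    """Bi-tempered logistic loss of Eq. (8): Delta_{F_{t1}}(y, tempered softmax_{t2}(a))."""
    a = np.asarray(activations, dtype=float)
    return tempered_bregman(np.asarray(y, float), tempered_softmax(a, t2), t1)

y = np.array([1.0, 0.0, 0.0])                                    # one-hot label: class 1
print(bi_tempered_loss([ 2.0, 0.0, -1.0], y, t1=0.2, t2=4.0))    # well-classified example
print(bi_tempered_loss([-9.0, 0.0, -1.0], y, t1=0.2, t2=4.0))    # large-margin "outlier"
# The second value is larger but remains bounded, unlike the unbounded logistic loss.
```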
4.1 Properness and Monte-Carlo sampling

Let $P_{UK}(x, y)$ denote the (unknown) joint probability distribution of the observed variable $x \in \mathbb{R}^m$ and the class label $y \in [k]$. The goal of discriminative learning is to approximate the posterior distribution of the labels $P_{UK}(y \mid x)$ via a parametric model $P(y \mid x;\Theta)$ parameterized by $\Theta$. Thus the model fitting can be expressed as minimizing the following expected loss between the data and the model’s label probabilities

$\mathbb{E}_{P_{UK}(x)}\big[ \Delta\big( P_{UK}(y \mid x),\; P(y \mid x;\Theta) \big) \big]$,   (9)

where $\Delta\big( P_{UK}(y \mid x),\; P(y \mid x;\Theta) \big)$ is any divergence measure between $P_{UK}(y \mid x)$ and $P(y \mid x;\Theta)$. We use $\Delta := \Delta_{F_{t_1}}$ as the divergence and $P(i \mid x;\Theta) := P(y = i \mid x;\Theta) = \exp_{t_2}(\hat{a}_i - \lambda_{t_2}(\hat{a}))$, where $\hat{a}$ is the activation vector of the last layer given input $x$ and $\Theta$ is the set of all weights of the network. Ignoring the constant terms w.r.t. $\Theta$, our loss (9) becomes

$\mathbb{E}_{P_{UK}(x)}\Big[ \sum_i \big( -P_{UK}(i \mid x)\,\log_{t_1} P(i \mid x;\Theta) + \tfrac{1}{2-t_1}\, P(i \mid x;\Theta)^{2-t_1} \big) \Big]$   (10a)
$= -\mathbb{E}_{P_{UK}(x,y)}\big[ \log_{t_1} P(y \mid x;\Theta) \big] + \mathbb{E}_{P_{UK}(x)}\Big[ \tfrac{1}{2-t_1} \sum_i P(i \mid x;\Theta)^{2-t_1} \Big]$   (10b)
$\approx \frac{1}{N} \sum_n \Big( -\log_{t_1} P(y_n \mid x_n;\Theta) + \tfrac{1}{2-t_1} \sum_i P(i \mid x_n;\Theta)^{2-t_1} \Big)$,   (10c)

where from (10b) to (10c), we perform a Monte-Carlo approximation of the expectation w.r.t. $P_{UK}(x, y)$ using samples $\{(x_n, y_n)\}_{n=1}^N$. Thus, (10c) is an unbiased approximation of the expected loss (9) and is therefore a proper loss [20]. Following the same approximation steps for the Tsallis divergence used in [2], we have

$\mathbb{E}_{P_{UK}(x)}\Big[ \underbrace{-\sum_i P_{UK}(i \mid x)\, \log_{t_1} \tfrac{P(i \mid x;\Theta)}{P_{UK}(i \mid x)}}_{\Delta^{\text{Tsallis}}_{t_1}\left( P_{UK}(y \mid x),\; P(y \mid x;\Theta) \right)} \Big] \;\approx\; -\frac{1}{N} \sum_n \log_{t_1} \frac{P(y_n \mid x_n;\Theta)}{P_{UK}(y_n \mid x_n)}$,

which, due to the fact that $\log_t \frac{a}{b} \neq \log_t a - \log_t b$ in general, requires access to the (unknown) conditional distribution $P_{UK}(y \mid x)$. In this case the approximation $-\frac{1}{N}\sum_n \log_{t_1} P(y_n \mid x_n;\Theta)$ proposed in [2], obtained by setting $P_{UK}(y_n \mid x_n)$ to 1, is not an unbiased estimator of (9) and is therefore not proper.

4.2 Bayes-risk consistency

Another important property of a multiclass loss is Bayes-risk consistency [19]. Bayes-risk consistency of the two-temperature logistic loss based on the Tsallis divergence was shown in [2]. As expected, the tempered Bregman loss (8) is also Bayes-risk consistent, even in the non-convex case.

Proposition 2. The multiclass bi-tempered logistic loss $L^{t_2}_{t_1}(\hat{a} \mid y)$ is Bayes-risk consistent.

5 Experiments

We demonstrate the practical utility of the bi-tempered logistic loss function on a wide variety of image classification tasks. For moderate-size experiments, we use the MNIST dataset of handwritten digits [14] and CIFAR-100, which contains real-world images from 100 different classes [13]. We use ImageNet-2012 [6], which has 1000 classes, for large-scale image classification. All experiments are carried out using the TensorFlow [1] framework. We use P100 GPUs for the small-scale experiments and Cloud TPU-v2 for the larger-scale ImageNet-2012 experiments. An implementation of the bi-tempered logistic loss is available online at: https://github.com/google/bi-tempered-loss.

5.1 Corrupted labels experiments

For our moderate-size datasets, i.e. MNIST and CIFAR-100, we introduce noise by artificially corrupting a fraction of the labels and producing a new set of labels for each noise level. For all experiments, we compare our bi-tempered loss function against the logistic loss. For MNIST, we use a CNN with two convolutional layers of size 32 and 64 with a mask size of 5, followed by two fully-connected layers of size 1024 and 10. We apply max-pooling after each convolutional layer with a window size equal to 2 and use dropout during training with keep probability equal to 0.75. We use the AdaDelta optimizer [21] with 500 epochs and a batch size of 128 for training. For CIFAR-100, we use a Resnet-56 [10] model without batch norm from [9] with the SGD + momentum optimizer, trained for 50k steps with a batch size of 128 and the standard staircase learning rate decay schedule. For both experiments, we report the test accuracy of the checkpoint which yields the highest accuracy on an identically label-noise corrupted validation set. We search over a set of learning rates for each experiment. For both experiments, we exhaustively search over a number of temperatures within the ranges [0.5, 1) and (1.0, 4.0] for t1 and t2, respectively. The results are presented in Table 1, where we report the top-1 accuracy on a clean test set.
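Purely for concreteness, a hypothetical Keras reconstruction of the MNIST model described above; the layer sizes, pooling, and dropout follow the text, while the activations, padding, and compile settings are assumptions, and the baseline logistic loss is shown where the bi-tempered loss from the linked repository would be substituted:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", padding="same",
                           input_shape=(28, 28, 1)),  # first conv layer, 5x5 "mask"
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dropout(0.25),                    # keep probability 0.75
    tf.keras.layers.Dense(10),                        # activations fed to the loss
])
model.compile(optimizer=tf.keras.optimizers.Adadelta(),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```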
As can be seen, the bi-tempered loss outperforms the logistic loss for all noise levels (including the noise-free case for CIFAR-100). Using our bi-tempered loss function, the model is able to continue to perform well even for high levels of label noise, whereas the accuracy of the logistic loss drops immediately at a much smaller level of noise.

5.2 Large scale experiments

We train state-of-the-art Resnet-18 and Resnet-50 models on the ImageNet-2012 dataset. Note that the ImageNet-2012 dataset is inherently noisy due to some amount of mislabeling. We train on a 4x4 Cloud TPU-v2 device with a batch size of 4096. All experiments are trained for 180 epochs and use the SGD + momentum optimizer with a staircase learning rate decay schedule. The results are presented in Table 2. For both architectures we see a significant gain in the top-1 accuracy using the robust bi-tempered loss.

6 Conclusion and Future Work

Neural networks on large standard datasets have been optimized along with a large variety of variables such as architecture, transfer function, choice of optimizer, and label smoothing, to name just a few. We proposed a new variant by training the network with tunable loss functions. We do this by first developing convex loss functions based on temperature-dependent logarithm and exponential functions. When both temperatures are the same, a construction based on the notion of “matching loss” leads to loss functions that are convex in the last layer. However, by letting the temperature of the new tempered softmax function be larger than the temperature of the tempered log function used in the divergence, we construct tunable losses that are non-convex in the last layer. Our construction remedies two issues simultaneously: we construct bounded tempered loss functions that can handle large-margin outliers, and we introduce heavy-tailedness in our new tempered softmax function that seems to handle small-margin mislabeled examples. At this point, we simply took a number of benchmark datasets and networks that have been heavily optimized for the logistic loss paired with the vanilla softmax, and replaced the loss in the last layer by our new construction. By trying a number of temperature pairs, we already achieved significant improvements. We believe that with a systematic “joint optimization” of all commonly tried variables, significant further improvements can be achieved. This is of course a more long-term goal. We also plan to explore the idea of annealing the temperature parameters over the training process.

Acknowledgement

We would like to thank Jerome Rony for pointing out that early stopping improves the accuracy of the logistic loss on the noisy MNIST experiment. This research was partially supported by the NSF grant IIS-1546459.
1. What is the focus of the paper regarding the proposed loss? 2. How does the reviewer assess the novelty of the proposed approach compared to prior works? 3. What are the strengths of the paper regarding its clarity, quality, and theoretical analysis? 4. Do you have any concerns or questions about the significance of the paper's contribution to the community?
Review
Review The authors propose a loss which is controlled by two temperature parameters of the generalized logarithm (exponential), one of which comes from the probability model and the other from the Bregman (beta) divergence. The proposed loss shows a robust nature to noise, as expected, and this is confirmed by simple numerical experiments with good visualization. The idea looks similar to ref [2], using the Bregman divergence instead of the Tsallis divergence; therefore the proposal is not surprising. In statistics, it is well known that Bregman divergences (not only the beta-divergence) lead to consistent and robust estimators, and that heavy-tailed distributions (not only the Bregman-dual link function) are insensitive to outliers, so the robustness of the proposed loss looks like a natural outcome. The former facts are intensively investigated by Shinto Eguchi, Frank Nielsen and their collaborators. The latter facts are found in, for example, a famous book by Huber (1981). The overall organization of the paper is well considered, including a good introduction of Bregman divergences, and the theoretical discussion of the proposed loss is clear enough, so the quality and clarity of the paper are very high. The presented idea is nice, but its significance to the community is unclear.
NIPS
1. What is the novel contribution of the paper in terms of loss functions for DNN classification? 2. How does the proposed biparametric logistic loss improve upon the usual logistic loss? 3. Can you provide more context on the reference to "generalized thermostatistics"? 4. How does the paper relate to previous works in information geometry and its applications? 5. Can you clarify the statement regarding Tsallis-based divergences not resulting in proper loss functions? 6. Are there any minor errors or typos in the review that should be addressed?
Review
Review The paper is well-motivated and introduces a novel tunable class of losses for (DNN) classification by replacing the usual logistic loss. The experiments demonstrate the gain obtained by using this biparametric logistic loss and improve upon AISTATS'19. - Mention the general deformed logarithm and exponential (integral of a monotonous function) and then introduce its specialization in Eq. 1. Cite the book of Jan Naudts: "Generalised Thermostatistics", Springer. - Explain that the parameters are called temperatures because of their use in thermostatistics [14]. I think the paper (and notably the abstract) will gain in readability by not mentioning the "temperature" of generalized thermostatistics. - Cite the book of Amari 2016, "Information Geometry and Its Applications", mention conformal flattening of the Tsallis relative entropy, and escort distributions. - Need to state whether the domain is open (convex) or not, and whether the Bregman generator is of Legendre type or not. - Should better explain "However, the Tsallis based divergences do not result in proper loss functions" (AISTATS 19 paper). Minor typos: - Kullback Leibler divergence -> Kullback-Leibler divergence - typo 106 -> Kullback-Leibler (KL) divergence
NIPS
Title Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games Abstract For common notions of correlated equilibrium in extensive-form games, computing an optimal (e.g., welfare-maximizing) equilibrium is NP-hard. Other equilibrium notions—communication [11] and certification [12] equilibria—augment the game with a mediator that has the power to both send and receive messages to and from the players—and, in particular, to remember the messages. In this paper, we investigate both notions in extensive-form games from a computational lens. We show that optimal equilibria in both notions can be computed in polynomial time, the latter under a natural additional assumption known in the literature. Our proof works by constructing a mediator-augmented game of polynomial size that explicitly represents the mediator’s decisions and actions. Our framework allows us to define an entire family of equilibria by varying the mediator’s information partition, the players’ ability to lie, and the players’ ability to deviate. From this perspective, we show that other notions of equilibrium, such as extensive-form correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. As special cases of our general construction, we recover 1) the polynomial-time algorithm of Conitzer and Sandholm [8] for automated mechanism design in Bayes-Nash equilibria and 2) the correlation DAG algorithm of Zhang et al. [31] for optimal correlation. Our algorithm is especially scalable when the equilibrium notion is what we define as the full-certification equilibrium, where players cannot lie about their information but they can be silent. We back up our theoretical claims with experiments on a suite of standard benchmark games. 1 Introduction Various equilibrium notions in general-sum extensive-form games are used to describe situations where the players have access to a trusted third-party mediator, who can communicate with the players. Depending on the power of the mediator and the form of communication, these notions include the normal-form [1] and extensive-form correlated equilibrium (NFCE and EFCE) [29], the normal-form [25] and extensive-form [10] coarse-correlated equilibrium (NFCCE and EFCCE), the communication equilibrium [11], and the certification equilibrium [12]. Several of these notions, in particular the EFCE and EFCCE, were defined for mainly computational reasons: the EFCE as a computationally-reasonable relaxation to NFCE, and the EFCCE as a computationally-reasonable relaxation of EFCE. When the goal is to compute a single correlated equilibrium, these relaxations are helpful: there are polynomial-time algorithms for computing an EFCE [16]. However, from the perspective of computing optimal equilibria—that is, equilibria that 36th Conference on Neural Information Processing Systems (NeurIPS 2022). maximize the expected value of a given function, such as the social welfare—even these relaxations fall short: for all of the correlation notions above, computing an optimal equilibrium of an extensive-form game is NP-hard [29, 10]. On the other hand, notions of equilibrium involving communication in games have arisen. These differ from the notions of correlation in that the mediator can receive and remember information from the players, and therefore pass information between players as necessary to back up their suggestions. 
Certification equilibria [12] further strengthen communication equilibria by allowing players to prove certain information to the mediator. To our knowledge, the computational complexity of optimal communication or certification equilibria has never been studied. We do so in this paper. The main technical result of our paper is a polynomial-time algorithm for computing optimal communication and certification equilibria (the latter under a certain natural condition about what messages the players can send). This stands in stark contrast to the notions of correlation discussed above. To prove our main result, we define a general class of mediator-augmented games, each having polynomial size, that is sufficient to describe all of the above notions of equilibrium except the NFCE1. We also build on this main result in several ways. 1. We define the full-certification equilibrium, which is the special case in which players cannot lie to the mediator (but can opt out of revealing their information). In this case, the algorithm is a linear program whose size is almost linear in the size of the original game. As such, this special case scales extremely well compared to all of the other notions. 2. We formalize notions for incorporating payments in the language of our augmented game. By using payments, mediators can incentivize players to play differently than they otherwise would, possibly to the benefit of the mediator’s utility function. 3. We define an entire family of equilibria using our augmented game, that includes as special cases the communication equilibrium, certification equilibrium, NFCCE, EFCCE, and EFCE. From this perspective, we show that other notions of equilibrium, such as extensiveform correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. We argue that, for this reason, many stated practical applications of correlated equilibria should actually be using communication or certification equilibria instead, which are both easier to compute (in theory, at least) and better at modelling the decision-making process of a rational mediator. 4. We empirically verify the above claims via experiments on a standard set of game instances. Applications and related work. Correlated and communication equilibria have various applications that have been well-documented. Here, we discuss just a few of them, as motivation for our paper. For further discussion of related work, especially relating to automated dynamic mechanism design and persuasion, see Appendix F. Bargaining, negotiation, and conflict resolution [4, 9]. Two parties with asymmetric information wish to arrive at an agreement, say, the price of an item. A mediator, such as a central third-party marketplace, does not know the players’ information but can communicate with the players. Crowdsourcing and ridesharing [13, 22, 31]. A group of players each has individual goals (e.g., to make money by serving customers at specific locations). The players are coordinated by a central party (e.g., a ridesharing company) that has more information than any one of the players, but the players are free to ignore recommendations if they so choose. Persuasion in games [17, 3, 23, 14, 30]. The mediator (in that literature, usually “sender”) has more information than the players (“receivers”), and wishes to tell information to the receivers so as to persuade them to act in a certain way. 
Automated mechanism design [6, 8, 33, 35, 26, 34, 18, 19]. Players have private information unknown to the mediator. The mediator wishes to commit to a strategy—that is, set a mechanism— such that players are incentivized to honestly reveal their information. In fact, in Appendix E we will see that we recover the polynomial-time Bayes-Nash randomized mechanism design algorithm of [6, 8] as a special case of our main result. 1We do not consider the NFCE, because it breaks our paradigm, which enforces that the mediator’s recommendation be a single action. In NFCE, the whole strategy needs to be revealed upfront. It is an open question whether it is possible to even find one NFCE in polynomial time, not to mention an optimal one. Some of the above examples are often used to motivate correlated equilibria. However, when the mediator is a rational agent with the ability to remember information that it is told and pass the information between players as necessary, we will argue that communication or certification equilibrium should be the notion of choice, for both conceptual and computational reasons. 2 Preliminaries In this section, we discuss background on correlation in extensive-form games. Extensive-form games. An extensive-form game Γ with n players consists of the following. 1. A directed tree of nodes or histories H, whose root is denoted ∅. The depth of the tree will be denoted T . The edges out of nodes are labeled with actions, and the set of such actions will be denoted Ah. Given a node h ∈ H and action a at h, the child reached by following action a at node h is denoted ha. The set of terminal (leaf) nodes inH is denoted Z . Terminal nodes will always be denoted z throughout the paper. 2. A partitionH \ Z = HC tH1 t · · · t . . .Hn of nodes, whereHi is the set of all nodes at which player i plays and playerHC is the set of chance nodes. 3. For each player i, a partition Ii of player i’s decision nodes, Hi, into information sets or infosets. Every node in a given information set I must have the same set of actions, denoted AI . We will call the partition I = I1 t · · · t In the players’ information partition. 4. For each player i, a utility vector ui ∈ [0, 1]Z , where ui[z] denotes the utility achieved by player i at terminal node z. 5. For each chance node h ∈ HC, a probability distribution p(·|h) over the children of h. The sequence σi(h) is the list of infosets reached by player i, and actions taken by the player i at those infosets, on the ∅ → h path, not including the infoset at h itself (if any). We will assume that each player has perfect recall—that is, for each infoset I , the sequence of the player acting at I should be the same for each node in I . We will denote this sequence σ(I). In perfect-recall games, nonempty sequences will be identified by the last infoset-action pair Ia in them. We also will assume that games are timeable and fixed-turn-order, that is, information sets do not span multiple levels of the tree, and all nodes in the same layer of the tree belong to the same player2. We will use the following notation. The relation denotes the natural precedence order induced by the tree H: we write h h′ means that h is an ancestor of h′ (or h = h′), and for sets S, S′, we say S S′ if there are some h ∈ S, h′ ∈ S′ such that h h′. The binary operation ∧ denotes the lowest common ancestor: h ∧ h′ is the lowest node u such that u h, h′. For sequences, σ(h) = (σ1(h), . . . , σn(h)) denotes the joint sequence of all players at node h. 
N(σ) denotes the set of possible next infosets following sequence σ, that is, N(σ) = {I : σ(I) = σ}. The set Σi denotes the set of sequences of player i, and Σ denotes the set of all sequences across all players (i.e., Σ = tiΣi). A pure strategy for a player i is a selection of one action for each information set I ∈ Ii. A pure profile is a tuple of pure strategies. A correlated profile is a distribution over pure profiles. We will generally work with strategies in realization form (see e.g., Koller et al. [20]). Given a pure strategy x, we say that x plays to z ∈ Z if x plays every action on the ∅ → z path. We will call the vector x ∈ {0, 1}Z the realization form of x. The realization form of a mixed strategy is the appropriate convex combination. The set of mixed strategies forms a convex subset of RZ that, so long as the player has perfect recall, can be expressed using linearly many constraints and variables. We will occasionally need to discuss changing information partitions of Γ. If J = J1 t · · · t Jn is another valid information partition, we will use ΓJ to denote the game Γ with its information partition replaced byJ . We will also occasionally need to talk about multiple games simultaneously; where this is the case, we will mark attributes of the game the same as the game itself. For example, Ĥ is the node set of game Γ̂. 2Timeability is not without loss of generality, but any game for which the precedence order is a partial order over infosets can be converted to a timeable game by adding dummy nodes. Given timeability, fixed-turnorder is without loss of generality, also by adding dummy nodes Communication and certification equilibria. Here, we review definitions related to communication equilibria, following Forges [11] and later related papers. Definition 2.1. Let S be a space of possible messages. A pure mediator strategy is a map d : S≤T → S, where S≤T denotes the set of sequences in S of length at most T . A randomized mediator strategy (hereafter simply mediator strategy) is a distribution over pure mediator strategies. We will assume that the space of possible messages is large, but not exponentially so. In particular, we will assume that {⊥} ∪ I ∪ ⋃ hAh ⊆ S (i.e., messages can at least be nothing, information, or actions)3 and that |S| ≤ poly(|H|). The latter assumption is mostly for cleanliness in stating results: we will give algorithms that need S as an input that we wish to run in time poly(|H|). A mediator strategy augments a game as follows. If the strategy is randomized, it first samples a pure strategy d, which is hidden from the players. At each timestep t, a player reaches a history h at which she must act, and observes the infoset I 3 h. She sends a message st ∈ S to the mediator. The mediator then sends a response d(s1, . . . , st), which depends on the message st as well as the messages sent by all other players prior to timestep t. Then, the player chooses her action a ∈ Ah. We will call the sequence of messages sent and received between the mediator and player i, the transcript with player i. A communication equilibrium4 is a Nash equilibrium of the game Γ augmented with a mediator strategy. The mediator is allowed to perform arbitrary communication with the players. In particular, the mediator is allowed to pass information from one player to another. Further, the players are free to send whatever messages they wish to the mediator, including false or empty messages. 
These two factors distinguish communication equilibria from notions of correlated equilibria. In Section 3.4 we will discuss this comparison in greater detail. A useful property in the literature on communication equilibria is the revelation principle (e.g., [11]). Informally, the revelation principle states that any outcome achievable by an arbitrary strategy profile can also be achieved by a direct strategy profile, in which the players tell the mediator all their information and are subsequently directly told by the mediator which action to play. In order to be a communication equilibrium, the players still must not have any incentive to deviate from the protocol. That is, the equilibrium must be robust to all messages that a player may attempt to send to the mediator, even if in equilibrium the player always sends the honest message. Forges and Koessler [12] further introduced a form of equilibrium for Bayesian games which they called certification equilibria. In certification equilibria, the messages that a player may legally send are dependent on their information; as such, some messages that a player can send are verifiable. At each information set I ∈ I, let SI ⊆ S denote the set of messages that the player at infoset I may send to the mediator. We will always assume that I ∈ SI and ⊥ ∈ SI for all I . That is, all players always have the options of revealing their true information or revealing nothing. 3 Extensive-form S-certification equilibria The central notion of interest in this paper is a generalization of the notion of certification equilibria [12] to extensive-form games. Definition 3.1. Given an extensive-form game Γ and a family of valid message sets S = {SI : I ∈ I}, an S-certification equilibrium is a Nash equilibrium of the game augmented by a randomized mediator, in which each player at each information set I is restricted to sending a message s ∈ SI . The existence of S-certification equilibria follows from the existence of Nash equilibria, which are the special case where the mediator does nothing. We will need one extra condition on the message sets, which is known as the nested range condition (NRC) [15]: if I ∈ SI′ , then SI ⊆ SI′ . That is, if a player with information I ′ can lie by pretending to have information I , then that player can also emulate any other message she would have been 3A priori, although the messages are given these names, they carry no semantic meaning. The revelation principle is used to assign natural meaning to the messages. 4Previous models of communication in games [11, 12] usually worked with a model in which players send messages, receive messages, and play moves simultaneously, rather than in sequence as in the extensive-game model that we use. The simultaneous-move model is easy to recreate in extensive form: by adding further “dummy nodes” at which players learn information but only have one legal action, we can effectively re-order when players ought to communicate their information to the mediator. able to send at I . Equivalently, the honest message I should be the most certifiable message that a player can send at infoset I . Our main result is the following. Theorem 3.2. Let uM ∈ RZ be an arbitrary utility vector for the mediator. Then there is a polynomial-time algorithm that, given a game Γ and a message set family S satisfying the nested range condition, computes an optimal S-certification equilibrium, that is, one that maximizes Ez uM[z] where the expectation is over playouts of the game under equilibrium. 
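Since the nested range condition is the key hypothesis of Theorem 3.2, it may help to see that it is mechanically checkable. The following is a hedged Python sketch; the dictionary representation of the message-set family, the use of None for the empty message ⊥, and the helper name are our own conventions, not the paper's.

```python
def satisfies_nrc(message_sets):
    """message_sets: dict mapping each infoset id I to the set S_I of messages it may send
    (the honest message I itself and the empty message, encoded as None, are assumed present).
    Nested range condition: if infoset I' may claim to be I, then S_I must be a subset of S_I'."""
    for I_prime, S_I_prime in message_sets.items():
        for msg in S_I_prime:
            if msg in message_sets and not message_sets[msg] <= S_I_prime:
                return False
    return True

# Example: the full-certification family, where each infoset may only reveal itself or stay silent.
infosets = ["I1", "I2", "I3"]
full_cert = {I: {I, None} for I in infosets}
assert satisfies_nrc(full_cert)
```

The full-certification family S_I = {⊥, I} used later in the paper satisfies the condition vacuously, and so does the communication-equilibrium choice S_I = S for all I.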
In particular, by setting SI = S for all I , Theorem 3.2 implies that optimal communication equilibria can be computed in polynomial time. The rest of the paper is organized as follows. First, we will prove our main theorem. Along the way, we will demonstrate a form of revelation principle for S-certification equilibria. We will then discuss comparisons to other known forms of equilibrium, including the extensive-form correlated equilibrium [29], and several other natural extensions of our model. Finally, we will show experimental results that compare the computational efficiency and social welfare of various notions of equilibrium on some experimental game instances. 3.1 Proof of Theorem 3.2: The single-deviator mediator-augmented game In this section, we construct a game Γ̂, with n + 1 players, that describes the game Γ where the mediator has been added as an explicit player. This game has similar structure to the one used by Forges [11, Corollary 2], but, critically, has size polynomial in |H|. This is due to two critical differences. First, the players are assumed to either send ⊥, or send messages that mediator cannot immediately prove to be off-equilibrium. In particular, if the player’s last message was I and the mediator recommended action a at I , the player must send a message I ′ with σ(I ′) = Ia. If this is impossible, the player must send ⊥. Therefore, in particular, we will assume that SI consists of only ⊥ and information sets I ′ at the same level as I . Second, only one player is allowed to deviate. Therefore, the strategy of the mediator is not defined in cases where two or more players deviate. We now formalize Γ̂. Nodes in Γ̂ will be identified by tuples (h, τ , r) where h ∈ H is a history in Γ, τ = (τ1, . . . , τn) is the collection of transcripts with all players, and r ∈ {REV, REC, ACT} is a stage marker that denotes whether the current state is one in which a player should be revealing information (REV), the mediator should be recommending a move (REC), or the player should be selecting an action (ACT). The progression of Γ̂ is then defined as follows. We will use the notation τ [i·s] to denote appending message s to τi. • The root node of Γ̂ is (∅, (∅, . . . ,∅), REV). • Nodes (z, τ , REV) for z ∈ Z are also terminal in Γ. The mediator gets utility uM[z], where u is the mediator’s utility function as in Theorem 3.2. All other players i get utility ui[z]. • Nodes (h, τ , REV) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, there is one valid transition, to (h, τ , ACT). 2. If some other player j 6= i has already deviated (i.e., σj(h) 6= τj)), there is one valid transition, to (h, τ [i·I], REC) where I 3 h. 3. If player i has deviated or no one has deviated, then player i observes the infoset I 3 h, and selects a legal message I ′ ∈ SI ∩ ({⊥} ∪N(τi)) to send to the mediator5. Transition to (h, τ [i·I ′], REV). • At (h, τ , REC) where h ∈ Hi, the mediator observes the transcript τi and makes a recommendation a. If τi contains any ⊥ messages, then a = ⊥. Otherwise, a is a legal action a ∈ AI , where I is the most recent message in τi. Transition to (h, τ [i·a], ACT). • Nodes (h, τ , ACT) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, then chance samples a random action a ∼ p(·|h). Transition to (ha, τ , REV). 2. If some other player j 6= i has already deviated, there is one valid transition, to (ha, τ , REC), where a is the action sent by the mediator. 
(Footnote 5: If τi contains any ⊥ messages, then we take N(τi) = ∅.) 3. If player i has deviated or no one has deviated, then player i observes the transcript τi, and selects an action a′ ∈ Ah. Transition to (ha′, τ , REV). The action a′ need not be the recommended action. Since at most one player can ever deviate by construction, and the lengths of the transcripts are fixed because turn order is common knowledge, the transcripts τ can be identified with the sequence σi of the deviated player, if any. We will make this identification: we will use the shorthand hσi to denote the history (h, (σ−i(h), σi), REV), and h⊥ for (h, σ(h), REV) (i.e., no one has deviated yet). Therefore, in particular, this game has at most O(|H||Σ|) histories. For each non-mediator player, there is a well-defined direct strategy x̂∗i for that player: always report her true information I ∋ h, and always play the action recommended by the mediator. The goal of the mediator is to find a strategy x̂M for itself that maximizes its expected utility, subject to the constraint that each player’s direct strategy is a best response—that is, find x̂M such that (x̂M, x̂∗1, . . . , x̂∗n) is a (strong) Stackelberg equilibrium of Γ̂. We claim that finding a mediator strategy x̂M that is a strong Stackelberg equilibrium in Γ̂ is equivalent to finding an optimal S-certification equilibrium in Γ. We prove this in two parts. First, we prove a version of the revelation principle for S-certification equilibria. Definition 3.3. An S-certification equilibrium is direct if it satisfies the following two properties. 1. (Mediator directness) If the transcript τi of a player i is exactly some sequence of player i, and player i sends an infoset I with σ(I) = τi, then the mediator replies with an action a ∈ AI . Otherwise6, the mediator replies ⊥. 2. (Player directness) In equilibrium, players always send their true information I, and, upon receiving an action a ∈ AI, always play that action. Proposition 3.4 (Revelation principle for S-certification equilibria under NRC). Assume that S satisfies the nested range condition. For any S-certification equilibrium, there is a realization-equivalent direct equilibrium. Omitted proofs can be found in the appendix. Since direct mediator strategies are exactly the mediator strategies in Γ̂, and the player strategies are only limited versions of what they are allowed to do in S-certification equilibrium, this implies that, for any S-certification equilibrium, there is a mediator strategy x̂M in Γ̂ such that (x̂M, x̂∗1, . . . , x̂∗n) is a Stackelberg equilibrium. We will also need the converse of this statement. Proposition 3.5. Let x̂M be a strategy for the mediator in Γ̂ such that, in the strategy profile (x̂M, x̂∗1, . . . , x̂∗n), every x̂∗i for i ≠ M is a best response. Then there is a direct S-certification equilibrium that is realization-equivalent to (x̂M, x̂∗1, . . . , x̂∗n). Therefore, we have shown that the mediator strategies x̂M in Γ̂ for which (x̂M, x̂∗1, . . . , x̂∗n) is a Stackelberg equilibrium in Γ̂ correspond exactly to optimal S-certification equilibria of Γ. Such a Stackelberg equilibrium can be found by solving the following program:
$$\max_{\hat{x}_M \in \hat{X}_M} \;\sum_{\hat{z} \in \hat{Z}} \hat{x}_M[\hat{z}]\, \hat{u}_M[\hat{z}]\, \hat{p}(\hat{z}) \prod_{i \in [n]} \hat{x}^*_i[\hat{z}] \qquad \text{s.t.} \quad \max_{\hat{x}'_j \in \hat{X}_j} \sum_{\hat{z} \in \hat{Z}} \hat{x}_M[\hat{z}]\, \hat{u}_j[\hat{z}]\, \hat{p}(\hat{z}) \left(\hat{x}'_j[\hat{z}] - \hat{x}^*_j[\hat{z}]\right) \prod_{i \neq j} \hat{x}^*_i[\hat{z}] \;\le\; 0 \quad \forall j \in [n] \tag{1}$$
where X̂i is the sequence-form strategy space [20] of player i in Γ̂. The only variables in the program are x̂i for each player i and the mediator. In particular, the direct strategies x̂∗i are constants.
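To see why the inner maximizations are the only obstruction to linearity in program (1), consider the following sketch of the dualization step (our notation; the paper's precise construction is in Appendix B). Suppose the sequence-form polytope of player $j$ is written as $\hat{X}_j = \{x \ge 0 : F_j x = f_j\}$, and, for a fixed mediator strategy, let $c_j[\hat z] := \hat{x}_M[\hat z]\,\hat{u}_j[\hat z]\,\hat{p}(\hat z)\prod_{i \ne j}\hat{x}^*_i[\hat z]$, which is linear in $\hat{x}_M$ because the $\hat{x}^*_i$ are held fixed. The $j$-th constraint of (1) reads $\max_{x \in \hat{X}_j} c_j^\top x \le c_j^\top \hat{x}^*_j$, and by LP duality
$$\max\{c_j^\top x : F_j x = f_j,\ x \ge 0\} \;=\; \min\{f_j^\top v : F_j^\top v \ge c_j\},$$
so the constraint is equivalent to the existence of a dual vector $v_j$ with $F_j^\top v_j \ge c_j$ and $f_j^\top v_j \le c_j^\top \hat{x}^*_j$, both of which are linear in the joint variables $(\hat{x}_M, v_j)$.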
Therefore, the objective is a linear function, and the inner maximization constraints are bilinear in x̂M and x̂j . Therefore, this program can be converted to a linear program by dualizing the inner optimizations. For more details on this conversion, see Appendix B. The result is a linear program of size O(n|Ĥ| ) = O(n|H||Σ|). We have thus proved Theorem 3.2. 6This condition is necessary because, if the mediator does not know what infoset the player is in, the mediator may not be able to send the player a valid action, because action sets may differ by infoset. 3.2 Extensions and special cases In this section, we describe several extensions and interesting special cases of our main result. Full-certification equilibria. One particular special case of S-certification equilibria which is particularly useful. We define a full-certification equilibrium as an S-certification equilibrium where SI = {⊥, I}. Intuitively, this means that players cannot lie to the mediator, but they may withhold information. We will call such an equilibrium full-certification. Removing valid messages from the players only reduces their ability to deviate and thus increases the space of possible equilibrium strategies. As such, the full-certification equilibria are the largest class of S-certification equilibria. For full-certification equilibria, the size of game Γ̂ reduces dramatically. Indeed, in all histories hIa of Γ̂, we must have I h. Therefore, we have |Ĥ| ≤ |H|BD where B is the maximum branching factor and D is the depth of the game tree, i.e., the size of Γ̂ goes from essentially quadratic to essentially quasilinear in |H|. The mediator’s decision points in Γ̂ for a full-certification equilibrium are the trigger histories used by Zhang et al. [31] in their analysis of various notions of correlated equilibria. Later, we will draw further connections between full certification and correlation. Changing the mediator’s information. In certain cases, the mediator, in addition to messages that it is sent by the players, also has its own observations about the world. These are trivial to incorporate into our model: simply change the information partition of the mediator in Γ̂ as needed. Alternatively, one can imagine adding a “player”, with no rewards (hence no incentive to deviate), whose sole purpose is to observe information and pass it to the mediator. For purposes of keeping the game small, it is easier to adopt the former method. To this end, consider any refinement partition M of the mediator infosets in Γ̂, and consider the game Γ̂M created by replacing the mediator’s information partition in Γ̂ withM. Then we make the following definition. Definition 3.6. An (S,M)-certification equilibrium of Γ is a mediator strategy x̂M in Γ̂M such that, in the strategy profile (x̂M, x̂∗1, . . . , x̂ ∗ n), every x ∗ i for i 6= M is a best response. (S,M)-certification equilibria may not exist: indeed, ifM is coarser than the mediator’s original information partition in Γ̂, then the mediator may not have enough information to provide good recommendations under the restrictions of Γ̂. This can be remedied by allowing payments (see Appendix E), or by making the assumption that the mediator at least knows the transcript of the player to whom she is making any nontrivial recommendation: Definition 3.7. 
A mediator partitionM is direct if, at every mediator decision point (h, τ , REC), so long as |Ah| > 1, the mediator knows the transcript of the player acting at h.M is strongly direct if the mediator also observes the transcript when |Ah| = 1. The condition |Ah| > 1 in the definition allows the mediator to possibly not observe the full information of a player if she does not need to make a nontrivial recommendation to that player. In particular, this allows players to sometimes have information that they only partially reveal to the mediator, so long as the player does not immediately need to act on such information. Coarseness. In literature on correlation, coarseness refers to the restriction that a player must obey any recommendation that she receives (but may choose to deviate by not requesting a recommendation and instead playing any other action). Normal-form coarseness further adds the restriction that players can only choose to deviate at the start of the game—the mediator essentially takes over and plays the game on behalf of non-deviating players. These notions can easily be expressed in terms of our augmented game, therefore also allowing us to express coarse versions of our equilibrium notions as augmented games. 3.3 The gap between polynomial and not polynomial If players cannot send messages to the mediator at all, and the mediator has no other way of gaining any information, we recover the notion of autonomous correlated equilibrium (ACE). It is NP-hard to compute optimal ACE, even in Bayesian games (see e.g., von Stengel and Forges [29]). WhenM is direct and perfect recall, computing an optimal direct (S,M)-certification equilibrium can be done in polynomial time using our framework. When S obeys NRC and M satisfies a stronger condition7, the proof of the revelation principle (Propositions 3.4 and 3.5) works, and the resulting equilibrium is guaranteed to be optimal over all possible equilibria including those that may not be direct. If NRC does not hold, one can still solve the program (1), and the solution is still guaranteed to be an optimal direct equilibrium by Proposition 3.5. However, it is not guaranteed to be optimal over all possible communication structures. Indeed, Green and Laffont [15, Theorem 1] give an instance in which, without NRC, there can be an outcome distribution that is not implementable by a direct mediator. Our program cannot find such an outcome distribution. The counterexample does not preclude the possibility of efficient algorithms for finding optimal certification equilibria in more general cases, but does give intuition for why NRC is crucial to our construction. We could also consider changing the mediator’s information partition so that the mediator does not have perfect recall. This transformation allows us to recover notions of correlation in games. Indeed, if we start from the full-certification equilibrium and only allow the mediator to remember the transcript with the player she is currently talking to, we recover EFCE. Adding coarseness similarly recovers EFCCE and NFCCE. In this setting, the inability to represent the strategy space of an imperfect-recall player may result in the loss of efficient algorithms. 
3.4 A family of equilibria By varying 1) what the mediator observes, 2) whether the mediator has perfect recall, 3) whether the players can lie or only withhold information, and 4) when and how players can deviate from the mediator’s recommended actions, we can use our framework to define a family consisting of 16 conceptually different equilibrium notions. More can be generated by considering other variations in this design space, but we focus on the extreme cases in the table. Some of these were already defined in the literature; the remaining names are ours. The result is Table 1. An inclusion diagram for these notions can be found in Appendix G. In the table, ex ante means that players have only a binary choice between deviating (in which case they can play whatever they want) and playing (in which case they must always be direct and obey recommendations). With ex ante deviations, it does not matter whether lying is allowed because we can never get to that stage: either the player deviates immediately and never communicates with the mediator, or the player is direct. If the mediator only remembers the current active player’s information, and players cannot lie, withholding and coarsely deviating are the same. Mediator information advantage means that the mediator always learns the infoset of the current active player, and therefore requires no messages from the players. This is equivalent to forcing players to truthfully report information. A mediator with information advantage may still not have perfect information—for example, it will not know whether a player (or nature) has played an action until some other player observes the action. In this setting, the mediator may also have extra private information (known to none of the players), leading to the setting of Bayesian persuasion [17]. In extensive-form games, there are two different reasonable notions of persuasion: one that stems from extending correlated equilibria, and one that stems from extending communication equilibria. The distinction is that, in the former, the mediator has imperfect recall. For a more in-depth discussion of Bayesian persuasion, see Appendix F. Our framework allows optimal equilibria for all notions in the table to be computed. For perfectrecall mediators, this is possible in polynomial time via the sequence form; for imperfect-recall mediators, the problem is NP-hard, in general, but the team belief DAG of Zhang et al. [32] can be used to recover fixed-parameter algorithms. For the notions of correlated equilibrium, this method results in basically the same LP as the correlation DAG of Zhang et al. [31]. We do not claim that all of these notions are easy to motivate. For example, correlated equilibria are usually arrived at in the “truth known, imperfect recall” setting; the correlated equilibrium notions where lying is allowed are more difficult to motivate in this respect. Further, even the fixedparameter algorithms of Zhang et al. [31] would fail in this setting, because “public states” can no longer be treated as public due to the possibility of lying players. We leave to future research the problem of finding a motivation for the notions that we do not reference elsewhere in the paper. 7Roughly speaking, this condition is that players should not be able to cause the mediator to gain information apart from their own messages by sending messages. It holds for all notions we discuss in this paper. Formalizing the general case is beyond the scope of this paper. 
4 Experiments We ran our algorithm for communication and full-certification equilibria on various two-player games, and compared the results to those given by notions of optimal correlation in games. The games used in the experiments are given in Appendix D. All experiments were allocated four CPU cores and 64 GB of RAM. Linear programs were solved with Gurobi 9.5. When payments are used, the allowable payment range is [0,M ] where M is the reward range of the game. Experimental results can be found in Table 2. In the battleship and sheriff instances, there is not a significant difference in performance between finding full-certification equilibria and finding optimal correlated equilibria in terms of performance—this is because, unlike in the general case, optimal correlated equilibria in two-player games without chance can be found in polynomial time [29] anyway. In the ridesharing instances, computing optimal correlated equilibria is much more computationally intensive because the game contains non-public chance actions. Computing optimal full-certification equilibria is comparably easy, and this difference is clearly seen in the timing results. Finding optimal communication equilibria is much more intensive than finding optimal fullcertification equilibria, owing to the quadratic size of the augmented game for communication equilibria. This often causes communication equilibria to be the hardest of the notions to compute in practice, despite optimal correlation being NP-hard. In Figure 1, we have plotted the payoff spaces of some representative instances. The plots show how the polytopes of communication and full-certification equilibria behave relative to correlated equilibria. In the battleship and sheriff instances, the space of communication equilibrium payoffs is a single point, which implies that the space of NFCE (and hence Nash) equilibrium payoffs is also that single point. Unfortunately, that point is the Pareto-least-optimal point in the space of EFCEs. In the ridesharing instances, communication allows higher payoffs. This is because the mediator is allowed to “leak” information between players. 5 Conclusions and future research We have shown that optimal communication and certification equilibria in extensive-form games can be computed via linear programs of polynomial size, or almost-linear size in the full-certification case. We have used our machinery to derive an entire family of equilibrium concepts which we hope to be of use in the future. Possible future directions include the following. 1. Are there efficient online learning dynamics, in any reasonable sense of that term, that converge to certification or communication equilibrium? 2. Is there a better-than-quadratic-size linear program for communication equilibria? 3. Is it possible to extend our augmented game construction to also cover normal-form corre- lated equilibria while maintaining efficiency? 4. Investigate further the comparison between communication and correlation in games. For example, when and why do communication equilibria achieve higher social welfare than extensive-form correlated equilibria? Acknowledgements This material is based on work supported by the National Science Foundation under grants IIS-1901403 and CCF-1733556, and the ARO under award W911NF2010081.
1. What is the focus of the paper regarding extensive form games and mediator communication? 2. What are the strengths of the proposed approach, particularly in terms of computational efficiency? 3. What are the weaknesses of the paper, especially regarding its notation usage and lack of illustrative examples? 4. Do you have any questions regarding the mediator-augmented game and its conversion to a Stackelberg game? 5. How does the reviewer assess the novelty and suitability of the paper for NeurIPS?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies the complexity of correlated equilibrium in extensive-form games with mediator communication, known as communication or certification equilibria. The main result is a polynomial-time algorithm, specifically a linear program formulation, for computing optimal communication and certification equilibria. To prove this result, the paper defines a mediator-augmented game, which basically includes the mediator as an additional player. The new game has polynomial size and can describe several equilibrium notions. Strengths And Weaknesses Strengths: -- the study of communication in extensive-form games seems an interesting and well-motivated problem. -- the fact that computing the optimal equilibrium is hard for most correlated equilibrium notions, whereas this paper identifies a polynomial-time algorithm for CE with a mediator, is interesting. Weakness: -- The paper is very notation-heavy without much illustration of the intuition behind the model. For instance, I was hoping the paper could instantiate its model in one of the three listed applications (i.e., crowdsourcing and ridesharing, persuasion, or automated MD), to familiarize the audience with the setup and the motivation for the mediator. -- The techniques of the work appear somewhat standard. It is not clear what the novelty is. Frankly speaking, I spent more time understanding the notation in Section 2 than understanding the proof of the main theorem, which is a natural conversion of the original game to a Stackelberg game with the mediator as the leader. -- This is purely an equilibrium optimization paper without any aspect of learning, so it does not seem a good fit for NeurIPS. Questions "The infinite family of constraints can be made polynomial-sized by taking a dual of the inner optimization" at the end of Section 3.1 --- would you elaborate on how to achieve this, or is this a known result? Limitations N/A
NIPS
Title Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games Abstract For common notions of correlated equilibrium in extensive-form games, computing an optimal (e.g., welfare-maximizing) equilibrium is NP-hard. Other equilibrium notions—communication [11] and certification [12] equilibria—augment the game with a mediator that has the power to both send and receive messages to and from the players—and, in particular, to remember the messages. In this paper, we investigate both notions in extensive-form games from a computational lens. We show that optimal equilibria in both notions can be computed in polynomial time, the latter under a natural additional assumption known in the literature. Our proof works by constructing a mediator-augmented game of polynomial size that explicitly represents the mediator’s decisions and actions. Our framework allows us to define an entire family of equilibria by varying the mediator’s information partition, the players’ ability to lie, and the players’ ability to deviate. From this perspective, we show that other notions of equilibrium, such as extensive-form correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. As special cases of our general construction, we recover 1) the polynomial-time algorithm of Conitzer and Sandholm [8] for automated mechanism design in Bayes-Nash equilibria and 2) the correlation DAG algorithm of Zhang et al. [31] for optimal correlation. Our algorithm is especially scalable when the equilibrium notion is what we define as the full-certification equilibrium, where players cannot lie about their information but they can be silent. We back up our theoretical claims with experiments on a suite of standard benchmark games. 1 Introduction Various equilibrium notions in general-sum extensive-form games are used to describe situations where the players have access to a trusted third-party mediator, who can communicate with the players. Depending on the power of the mediator and the form of communication, these notions include the normal-form [1] and extensive-form correlated equilibrium (NFCE and EFCE) [29], the normal-form [25] and extensive-form [10] coarse-correlated equilibrium (NFCCE and EFCCE), the communication equilibrium [11], and the certification equilibrium [12]. Several of these notions, in particular the EFCE and EFCCE, were defined for mainly computational reasons: the EFCE as a computationally-reasonable relaxation to NFCE, and the EFCCE as a computationally-reasonable relaxation of EFCE. When the goal is to compute a single correlated equilibrium, these relaxations are helpful: there are polynomial-time algorithms for computing an EFCE [16]. However, from the perspective of computing optimal equilibria—that is, equilibria that 36th Conference on Neural Information Processing Systems (NeurIPS 2022). maximize the expected value of a given function, such as the social welfare—even these relaxations fall short: for all of the correlation notions above, computing an optimal equilibrium of an extensive-form game is NP-hard [29, 10]. On the other hand, notions of equilibrium involving communication in games have arisen. These differ from the notions of correlation in that the mediator can receive and remember information from the players, and therefore pass information between players as necessary to back up their suggestions. 
Certification equilibria [12] further strengthen communication equilibria by allowing players to prove certain information to the mediator. To our knowledge, the computational complexity of optimal communication or certification equilibria has never been studied. We do so in this paper. The main technical result of our paper is a polynomial-time algorithm for computing optimal communication and certification equilibria (the latter under a certain natural condition about what messages the players can send). This stands in stark contrast to the notions of correlation discussed above. To prove our main result, we define a general class of mediator-augmented games, each having polynomial size, that is sufficient to describe all of the above notions of equilibrium except the NFCE1. We also build on this main result in several ways. 1. We define the full-certification equilibrium, which is the special case in which players cannot lie to the mediator (but can opt out of revealing their information). In this case, the algorithm is a linear program whose size is almost linear in the size of the original game. As such, this special case scales extremely well compared to all of the other notions. 2. We formalize notions for incorporating payments in the language of our augmented game. By using payments, mediators can incentivize players to play differently than they otherwise would, possibly to the benefit of the mediator’s utility function. 3. We define an entire family of equilibria using our augmented game, that includes as special cases the communication equilibrium, certification equilibrium, NFCCE, EFCCE, and EFCE. From this perspective, we show that other notions of equilibrium, such as extensiveform correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. We argue that, for this reason, many stated practical applications of correlated equilibria should actually be using communication or certification equilibria instead, which are both easier to compute (in theory, at least) and better at modelling the decision-making process of a rational mediator. 4. We empirically verify the above claims via experiments on a standard set of game instances. Applications and related work. Correlated and communication equilibria have various applications that have been well-documented. Here, we discuss just a few of them, as motivation for our paper. For further discussion of related work, especially relating to automated dynamic mechanism design and persuasion, see Appendix F. Bargaining, negotiation, and conflict resolution [4, 9]. Two parties with asymmetric information wish to arrive at an agreement, say, the price of an item. A mediator, such as a central third-party marketplace, does not know the players’ information but can communicate with the players. Crowdsourcing and ridesharing [13, 22, 31]. A group of players each has individual goals (e.g., to make money by serving customers at specific locations). The players are coordinated by a central party (e.g., a ridesharing company) that has more information than any one of the players, but the players are free to ignore recommendations if they so choose. Persuasion in games [17, 3, 23, 14, 30]. The mediator (in that literature, usually “sender”) has more information than the players (“receivers”), and wishes to tell information to the receivers so as to persuade them to act in a certain way. 
Automated mechanism design [6, 8, 33, 35, 26, 34, 18, 19]. Players have private information unknown to the mediator. The mediator wishes to commit to a strategy—that is, set a mechanism— such that players are incentivized to honestly reveal their information. In fact, in Appendix E we will see that we recover the polynomial-time Bayes-Nash randomized mechanism design algorithm of [6, 8] as a special case of our main result. 1We do not consider the NFCE, because it breaks our paradigm, which enforces that the mediator’s recommendation be a single action. In NFCE, the whole strategy needs to be revealed upfront. It is an open question whether it is possible to even find one NFCE in polynomial time, not to mention an optimal one. Some of the above examples are often used to motivate correlated equilibria. However, when the mediator is a rational agent with the ability to remember information that it is told and pass the information between players as necessary, we will argue that communication or certification equilibrium should be the notion of choice, for both conceptual and computational reasons. 2 Preliminaries In this section, we discuss background on correlation in extensive-form games. Extensive-form games. An extensive-form game Γ with n players consists of the following. 1. A directed tree of nodes or histories H, whose root is denoted ∅. The depth of the tree will be denoted T . The edges out of nodes are labeled with actions, and the set of such actions will be denoted Ah. Given a node h ∈ H and action a at h, the child reached by following action a at node h is denoted ha. The set of terminal (leaf) nodes inH is denoted Z . Terminal nodes will always be denoted z throughout the paper. 2. A partitionH \ Z = HC tH1 t · · · t . . .Hn of nodes, whereHi is the set of all nodes at which player i plays and playerHC is the set of chance nodes. 3. For each player i, a partition Ii of player i’s decision nodes, Hi, into information sets or infosets. Every node in a given information set I must have the same set of actions, denoted AI . We will call the partition I = I1 t · · · t In the players’ information partition. 4. For each player i, a utility vector ui ∈ [0, 1]Z , where ui[z] denotes the utility achieved by player i at terminal node z. 5. For each chance node h ∈ HC, a probability distribution p(·|h) over the children of h. The sequence σi(h) is the list of infosets reached by player i, and actions taken by the player i at those infosets, on the ∅ → h path, not including the infoset at h itself (if any). We will assume that each player has perfect recall—that is, for each infoset I , the sequence of the player acting at I should be the same for each node in I . We will denote this sequence σ(I). In perfect-recall games, nonempty sequences will be identified by the last infoset-action pair Ia in them. We also will assume that games are timeable and fixed-turn-order, that is, information sets do not span multiple levels of the tree, and all nodes in the same layer of the tree belong to the same player2. We will use the following notation. The relation denotes the natural precedence order induced by the tree H: we write h h′ means that h is an ancestor of h′ (or h = h′), and for sets S, S′, we say S S′ if there are some h ∈ S, h′ ∈ S′ such that h h′. The binary operation ∧ denotes the lowest common ancestor: h ∧ h′ is the lowest node u such that u h, h′. For sequences, σ(h) = (σ1(h), . . . , σn(h)) denotes the joint sequence of all players at node h. 
N(σ) denotes the set of possible next infosets following sequence σ, that is, N(σ) = {I : σ(I) = σ}. The set Σi denotes the set of sequences of player i, and Σ denotes the set of all sequences across all players (i.e., Σ = tiΣi). A pure strategy for a player i is a selection of one action for each information set I ∈ Ii. A pure profile is a tuple of pure strategies. A correlated profile is a distribution over pure profiles. We will generally work with strategies in realization form (see e.g., Koller et al. [20]). Given a pure strategy x, we say that x plays to z ∈ Z if x plays every action on the ∅ → z path. We will call the vector x ∈ {0, 1}Z the realization form of x. The realization form of a mixed strategy is the appropriate convex combination. The set of mixed strategies forms a convex subset of RZ that, so long as the player has perfect recall, can be expressed using linearly many constraints and variables. We will occasionally need to discuss changing information partitions of Γ. If J = J1 t · · · t Jn is another valid information partition, we will use ΓJ to denote the game Γ with its information partition replaced byJ . We will also occasionally need to talk about multiple games simultaneously; where this is the case, we will mark attributes of the game the same as the game itself. For example, Ĥ is the node set of game Γ̂. 2Timeability is not without loss of generality, but any game for which the precedence order is a partial order over infosets can be converted to a timeable game by adding dummy nodes. Given timeability, fixed-turnorder is without loss of generality, also by adding dummy nodes Communication and certification equilibria. Here, we review definitions related to communication equilibria, following Forges [11] and later related papers. Definition 2.1. Let S be a space of possible messages. A pure mediator strategy is a map d : S≤T → S, where S≤T denotes the set of sequences in S of length at most T . A randomized mediator strategy (hereafter simply mediator strategy) is a distribution over pure mediator strategies. We will assume that the space of possible messages is large, but not exponentially so. In particular, we will assume that {⊥} ∪ I ∪ ⋃ hAh ⊆ S (i.e., messages can at least be nothing, information, or actions)3 and that |S| ≤ poly(|H|). The latter assumption is mostly for cleanliness in stating results: we will give algorithms that need S as an input that we wish to run in time poly(|H|). A mediator strategy augments a game as follows. If the strategy is randomized, it first samples a pure strategy d, which is hidden from the players. At each timestep t, a player reaches a history h at which she must act, and observes the infoset I 3 h. She sends a message st ∈ S to the mediator. The mediator then sends a response d(s1, . . . , st), which depends on the message st as well as the messages sent by all other players prior to timestep t. Then, the player chooses her action a ∈ Ah. We will call the sequence of messages sent and received between the mediator and player i, the transcript with player i. A communication equilibrium4 is a Nash equilibrium of the game Γ augmented with a mediator strategy. The mediator is allowed to perform arbitrary communication with the players. In particular, the mediator is allowed to pass information from one player to another. Further, the players are free to send whatever messages they wish to the mediator, including false or empty messages. 
These two factors distinguish communication equilibria from notions of correlated equilibria. In Section 3.4 we will discuss this comparison in greater detail. A useful property in the literature on communication equilibria is the revelation principle (e.g., [11]). Informally, the revelation principle states that any outcome achievable by an arbitrary strategy profile can also be achieved by a direct strategy profile, in which the players tell the mediator all their information and are subsequently directly told by the mediator which action to play. In order to be a communication equilibrium, the players still must not have any incentive to deviate from the protocol. That is, the equilibrium must be robust to all messages that a player may attempt to send to the mediator, even if in equilibrium the player always sends the honest message. Forges and Koessler [12] further introduced a form of equilibrium for Bayesian games which they called certification equilibria. In certification equilibria, the messages that a player may legally send are dependent on their information; as such, some messages that a player can send are verifiable. At each information set I ∈ I, let SI ⊆ S denote the set of messages that the player at infoset I may send to the mediator. We will always assume that I ∈ SI and ⊥ ∈ SI for all I . That is, all players always have the options of revealing their true information or revealing nothing. 3 Extensive-form S-certification equilibria The central notion of interest in this paper is a generalization of the notion of certification equilibria [12] to extensive-form games. Definition 3.1. Given an extensive-form game Γ and a family of valid message sets S = {SI : I ∈ I}, an S-certification equilibrium is a Nash equilibrium of the game augmented by a randomized mediator, in which each player at each information set I is restricted to sending a message s ∈ SI . The existence of S-certification equilibria follows from the existence of Nash equilibria, which are the special case where the mediator does nothing. We will need one extra condition on the message sets, which is known as the nested range condition (NRC) [15]: if I ∈ SI′ , then SI ⊆ SI′ . That is, if a player with information I ′ can lie by pretending to have information I , then that player can also emulate any other message she would have been 3A priori, although the messages are given these names, they carry no semantic meaning. The revelation principle is used to assign natural meaning to the messages. 4Previous models of communication in games [11, 12] usually worked with a model in which players send messages, receive messages, and play moves simultaneously, rather than in sequence as in the extensive-game model that we use. The simultaneous-move model is easy to recreate in extensive form: by adding further “dummy nodes” at which players learn information but only have one legal action, we can effectively re-order when players ought to communicate their information to the mediator. able to send at I . Equivalently, the honest message I should be the most certifiable message that a player can send at infoset I . Our main result is the following. Theorem 3.2. Let uM ∈ RZ be an arbitrary utility vector for the mediator. Then there is a polynomial-time algorithm that, given a game Γ and a message set family S satisfying the nested range condition, computes an optimal S-certification equilibrium, that is, one that maximizes Ez uM[z] where the expectation is over playouts of the game under equilibrium. 
In particular, by setting SI = S for all I , Theorem 3.2 implies that optimal communication equilibria can be computed in polynomial time. The rest of the paper is organized as follows. First, we will prove our main theorem. Along the way, we will demonstrate a form of revelation principle for S-certification equilibria. We will then discuss comparisons to other known forms of equilibrium, including the extensive-form correlated equilibrium [29], and several other natural extensions of our model. Finally, we will show experimental results that compare the computational efficiency and social welfare of various notions of equilibrium on some experimental game instances. 3.1 Proof of Theorem 3.2: The single-deviator mediator-augmented game In this section, we construct a game Γ̂, with n + 1 players, that describes the game Γ where the mediator has been added as an explicit player. This game has similar structure to the one used by Forges [11, Corollary 2], but, critically, has size polynomial in |H|. This is due to two critical differences. First, the players are assumed to either send ⊥, or send messages that mediator cannot immediately prove to be off-equilibrium. In particular, if the player’s last message was I and the mediator recommended action a at I , the player must send a message I ′ with σ(I ′) = Ia. If this is impossible, the player must send ⊥. Therefore, in particular, we will assume that SI consists of only ⊥ and information sets I ′ at the same level as I . Second, only one player is allowed to deviate. Therefore, the strategy of the mediator is not defined in cases where two or more players deviate. We now formalize Γ̂. Nodes in Γ̂ will be identified by tuples (h, τ , r) where h ∈ H is a history in Γ, τ = (τ1, . . . , τn) is the collection of transcripts with all players, and r ∈ {REV, REC, ACT} is a stage marker that denotes whether the current state is one in which a player should be revealing information (REV), the mediator should be recommending a move (REC), or the player should be selecting an action (ACT). The progression of Γ̂ is then defined as follows. We will use the notation τ [i·s] to denote appending message s to τi. • The root node of Γ̂ is (∅, (∅, . . . ,∅), REV). • Nodes (z, τ , REV) for z ∈ Z are also terminal in Γ. The mediator gets utility uM[z], where u is the mediator’s utility function as in Theorem 3.2. All other players i get utility ui[z]. • Nodes (h, τ , REV) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, there is one valid transition, to (h, τ , ACT). 2. If some other player j 6= i has already deviated (i.e., σj(h) 6= τj)), there is one valid transition, to (h, τ [i·I], REC) where I 3 h. 3. If player i has deviated or no one has deviated, then player i observes the infoset I 3 h, and selects a legal message I ′ ∈ SI ∩ ({⊥} ∪N(τi)) to send to the mediator5. Transition to (h, τ [i·I ′], REV). • At (h, τ , REC) where h ∈ Hi, the mediator observes the transcript τi and makes a recommendation a. If τi contains any ⊥ messages, then a = ⊥. Otherwise, a is a legal action a ∈ AI , where I is the most recent message in τi. Transition to (h, τ [i·a], ACT). • Nodes (h, τ , ACT) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, then chance samples a random action a ∼ p(·|h). Transition to (ha, τ , REV). 2. If some other player j 6= i has already deviated, there is one valid transition, to (ha, τ , REC), where a is the action sent by the mediator. 
5 If τi contains any ⊥ messages, then we take N(τi) = ∅.

3. If player i has deviated or no one has deviated, then player i observes the transcript τi, and selects an action a′ ∈ Ah. Transition to (ha′, τ, REV). The action a′ need not be the recommended action.

Since at most one player can ever deviate by construction, and the lengths of the transcripts are fixed because turn order is common knowledge, the transcripts τ can be identified with the sequence σi of the deviating player, if any. We will make this identification: we will use the shorthand hσi to denote the history (h, (σ−i(h), σi), REV), and h⊥ for (h, σ(h), REV) (i.e., no one has deviated yet). Therefore, in particular, this game has at most O(|H||Σ|) histories.

For each non-mediator player, there is a well-defined direct strategy x̂∗i for that player: always report her true information I ∋ h, and always play the action recommended by the mediator. The goal of the mediator is to find a strategy x̂M for itself that maximizes its expected utility, subject to the constraint that each player's direct strategy is a best response—that is, find x̂M such that (x̂M, x̂∗1, . . . , x̂∗n) is a (strong) Stackelberg equilibrium of Γ̂. We claim that finding a mediator strategy x̂M that is a strong Stackelberg equilibrium in Γ̂ is equivalent to finding an optimal S-certification equilibrium in Γ. We prove this in two parts. First, we prove a version of the revelation principle for S-certification equilibria.

Definition 3.3. An S-certification equilibrium is direct if it satisfies the following two properties.
1. (Mediator directness) If the transcript τi of a player i is exactly some sequence of player i, and player i sends an infoset I with σ(I) = τi, then the mediator replies with an action a ∈ AI. Otherwise,6 the mediator replies ⊥.
2. (Player directness) In equilibrium, players always send their true information I, and, upon receiving an action a ∈ AI, always play that action.

Proposition 3.4 (Revelation principle for S-certification equilibria under NRC). Assume that S satisfies the nested range condition. For any S-certification equilibrium, there is a realization-equivalent direct equilibrium.

Omitted proofs can be found in the appendix. Since direct mediator strategies are exactly the mediator strategies in Γ̂, and the player strategies are only limited versions of what they are allowed to do in an S-certification equilibrium, this implies that, for any S-certification equilibrium, there is a mediator strategy x̂M in Γ̂ such that (x̂M, x̂∗1, . . . , x̂∗n) is a Stackelberg equilibrium. We will also need the converse of this statement.

Proposition 3.5. Let x̂M be a strategy for the mediator in Γ̂ such that, in the strategy profile (x̂M, x̂∗1, . . . , x̂∗n), every x̂∗i for i ≠ M is a best response. Then there is a direct S-certification equilibrium that is realization-equivalent to (x̂M, x̂∗1, . . . , x̂∗n).

Therefore, we have shown that the mediator strategies x̂M in Γ̂ for which (x̂M, x̂∗1, . . . , x̂∗n) is a Stackelberg equilibrium in Γ̂ correspond exactly to optimal S-certification equilibria of Γ. Such a Stackelberg equilibrium can be found by solving the following program:

    max_{x̂M ∈ X̂M}   ∑_{ẑ ∈ Ẑ}  x̂M[ẑ] ûM[ẑ] p̂(ẑ) ∏_{i ∈ [n]} x̂∗i[ẑ]
    s.t.   max_{x̂′j ∈ X̂j}   ∑_{ẑ ∈ Ẑ}  x̂M[ẑ] ûj[ẑ] p̂(ẑ) ( x̂′j[ẑ] − x̂∗j[ẑ] ) ∏_{i ≠ j} x̂∗i[ẑ]  ≤  0    ∀ j ∈ [n]        (1)

where X̂i is the sequence-form strategy space [20] of player i in Γ̂. The only variables in the program are x̂i for each player i and the mediator. In particular, the direct strategies x̂∗i are constants.
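A small numeric sketch of the structure of program (1) follows. All arrays, sizes, and the all-ones stand-in for the direct strategies are made-up illustrative assumptions (real realization forms are 0/1 vectors determined by play); the sketch only shows how the objective and the deviation-gain terms are assembled from the leaf-indexed quantities.

# Sketch of the objective and deviation-gain terms of program (1) on made-up data.
import numpy as np

rng = np.random.default_rng(0)
num_leaves, n = 6, 2
p_hat  = rng.random(num_leaves)          # chance reach probabilities p̂(ẑ)
u_M    = rng.random(num_leaves)          # mediator utility û_M[ẑ]
u      = rng.random((n, num_leaves))     # players' utilities û_j[ẑ]
x_star = np.ones((n, num_leaves))        # direct strategies (constants in the program);
                                         # all-ones only for this toy illustration

def objective(x_M):
    return np.sum(x_M * u_M * p_hat * np.prod(x_star, axis=0))

def deviation_gain(x_M, j, x_dev_j):
    others = np.prod(np.delete(x_star, j, axis=0), axis=0)
    return np.sum(x_M * u[j] * p_hat * (x_dev_j - x_star[j]) * others)

# For fixed x_M, deviation_gain is linear in x_dev_j (and vice versa), which is exactly
# the bilinear structure that the dualization described next exploits.
x_M = rng.random(num_leaves)
print(objective(x_M), deviation_gain(x_M, 0, rng.random(num_leaves)))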
Therefore, the objective is a linear function, and the inner maximization constraints are bilinear in x̂M and x̂j . Therefore, this program can be converted to a linear program by dualizing the inner optimizations. For more details on this conversion, see Appendix B. The result is a linear program of size O(n|Ĥ| ) = O(n|H||Σ|). We have thus proved Theorem 3.2. 6This condition is necessary because, if the mediator does not know what infoset the player is in, the mediator may not be able to send the player a valid action, because action sets may differ by infoset. 3.2 Extensions and special cases In this section, we describe several extensions and interesting special cases of our main result. Full-certification equilibria. One particular special case of S-certification equilibria which is particularly useful. We define a full-certification equilibrium as an S-certification equilibrium where SI = {⊥, I}. Intuitively, this means that players cannot lie to the mediator, but they may withhold information. We will call such an equilibrium full-certification. Removing valid messages from the players only reduces their ability to deviate and thus increases the space of possible equilibrium strategies. As such, the full-certification equilibria are the largest class of S-certification equilibria. For full-certification equilibria, the size of game Γ̂ reduces dramatically. Indeed, in all histories hIa of Γ̂, we must have I h. Therefore, we have |Ĥ| ≤ |H|BD where B is the maximum branching factor and D is the depth of the game tree, i.e., the size of Γ̂ goes from essentially quadratic to essentially quasilinear in |H|. The mediator’s decision points in Γ̂ for a full-certification equilibrium are the trigger histories used by Zhang et al. [31] in their analysis of various notions of correlated equilibria. Later, we will draw further connections between full certification and correlation. Changing the mediator’s information. In certain cases, the mediator, in addition to messages that it is sent by the players, also has its own observations about the world. These are trivial to incorporate into our model: simply change the information partition of the mediator in Γ̂ as needed. Alternatively, one can imagine adding a “player”, with no rewards (hence no incentive to deviate), whose sole purpose is to observe information and pass it to the mediator. For purposes of keeping the game small, it is easier to adopt the former method. To this end, consider any refinement partition M of the mediator infosets in Γ̂, and consider the game Γ̂M created by replacing the mediator’s information partition in Γ̂ withM. Then we make the following definition. Definition 3.6. An (S,M)-certification equilibrium of Γ is a mediator strategy x̂M in Γ̂M such that, in the strategy profile (x̂M, x̂∗1, . . . , x̂ ∗ n), every x ∗ i for i 6= M is a best response. (S,M)-certification equilibria may not exist: indeed, ifM is coarser than the mediator’s original information partition in Γ̂, then the mediator may not have enough information to provide good recommendations under the restrictions of Γ̂. This can be remedied by allowing payments (see Appendix E), or by making the assumption that the mediator at least knows the transcript of the player to whom she is making any nontrivial recommendation: Definition 3.7. 
A mediator partitionM is direct if, at every mediator decision point (h, τ , REC), so long as |Ah| > 1, the mediator knows the transcript of the player acting at h.M is strongly direct if the mediator also observes the transcript when |Ah| = 1. The condition |Ah| > 1 in the definition allows the mediator to possibly not observe the full information of a player if she does not need to make a nontrivial recommendation to that player. In particular, this allows players to sometimes have information that they only partially reveal to the mediator, so long as the player does not immediately need to act on such information. Coarseness. In literature on correlation, coarseness refers to the restriction that a player must obey any recommendation that she receives (but may choose to deviate by not requesting a recommendation and instead playing any other action). Normal-form coarseness further adds the restriction that players can only choose to deviate at the start of the game—the mediator essentially takes over and plays the game on behalf of non-deviating players. These notions can easily be expressed in terms of our augmented game, therefore also allowing us to express coarse versions of our equilibrium notions as augmented games. 3.3 The gap between polynomial and not polynomial If players cannot send messages to the mediator at all, and the mediator has no other way of gaining any information, we recover the notion of autonomous correlated equilibrium (ACE). It is NP-hard to compute optimal ACE, even in Bayesian games (see e.g., von Stengel and Forges [29]). WhenM is direct and perfect recall, computing an optimal direct (S,M)-certification equilibrium can be done in polynomial time using our framework. When S obeys NRC and M satisfies a stronger condition7, the proof of the revelation principle (Propositions 3.4 and 3.5) works, and the resulting equilibrium is guaranteed to be optimal over all possible equilibria including those that may not be direct. If NRC does not hold, one can still solve the program (1), and the solution is still guaranteed to be an optimal direct equilibrium by Proposition 3.5. However, it is not guaranteed to be optimal over all possible communication structures. Indeed, Green and Laffont [15, Theorem 1] give an instance in which, without NRC, there can be an outcome distribution that is not implementable by a direct mediator. Our program cannot find such an outcome distribution. The counterexample does not preclude the possibility of efficient algorithms for finding optimal certification equilibria in more general cases, but does give intuition for why NRC is crucial to our construction. We could also consider changing the mediator’s information partition so that the mediator does not have perfect recall. This transformation allows us to recover notions of correlation in games. Indeed, if we start from the full-certification equilibrium and only allow the mediator to remember the transcript with the player she is currently talking to, we recover EFCE. Adding coarseness similarly recovers EFCCE and NFCCE. In this setting, the inability to represent the strategy space of an imperfect-recall player may result in the loss of efficient algorithms. 
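To illustrate the last point, here is a minimal sketch (toy labels and our own encoding; not the paper's construction) of the "forgetful mediator" coarsening: grouping the mediator's decision points by only the transcript of the player it is currently talking to, rather than by everything it has been told, which is the step that turns full certification into an EFCE-style correlation notion.

# Sketch: coarsening the mediator's information partition so it forgets other players' messages.

points = {
    # point id: (player currently being talked to, that player's transcript, other transcripts)
    "m1": ("P2", ("I2a",), ("I1x",)),
    "m2": ("P2", ("I2a",), ("I1y",)),   # differs only in what P1 told the mediator earlier
    "m3": ("P2", ("I2b",), ("I1x",)),
}

def perfect_recall_partition(points):
    groups = {}
    for d, (player, own, others) in points.items():
        groups.setdefault((player, own, others), set()).add(d)
    return list(groups.values())

def forgetful_partition(points):
    groups = {}
    for d, (player, own, _others) in points.items():
        groups.setdefault((player, own), set()).add(d)   # forget the other players' messages
    return list(groups.values())

print([sorted(g) for g in perfect_recall_partition(points)])   # [['m1'], ['m2'], ['m3']]
print([sorted(g) for g in forgetful_partition(points)])        # [['m1', 'm2'], ['m3']]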
3.4 A family of equilibria By varying 1) what the mediator observes, 2) whether the mediator has perfect recall, 3) whether the players can lie or only withhold information, and 4) when and how players can deviate from the mediator’s recommended actions, we can use our framework to define a family consisting of 16 conceptually different equilibrium notions. More can be generated by considering other variations in this design space, but we focus on the extreme cases in the table. Some of these were already defined in the literature; the remaining names are ours. The result is Table 1. An inclusion diagram for these notions can be found in Appendix G. In the table, ex ante means that players have only a binary choice between deviating (in which case they can play whatever they want) and playing (in which case they must always be direct and obey recommendations). With ex ante deviations, it does not matter whether lying is allowed because we can never get to that stage: either the player deviates immediately and never communicates with the mediator, or the player is direct. If the mediator only remembers the current active player’s information, and players cannot lie, withholding and coarsely deviating are the same. Mediator information advantage means that the mediator always learns the infoset of the current active player, and therefore requires no messages from the players. This is equivalent to forcing players to truthfully report information. A mediator with information advantage may still not have perfect information—for example, it will not know whether a player (or nature) has played an action until some other player observes the action. In this setting, the mediator may also have extra private information (known to none of the players), leading to the setting of Bayesian persuasion [17]. In extensive-form games, there are two different reasonable notions of persuasion: one that stems from extending correlated equilibria, and one that stems from extending communication equilibria. The distinction is that, in the former, the mediator has imperfect recall. For a more in-depth discussion of Bayesian persuasion, see Appendix F. Our framework allows optimal equilibria for all notions in the table to be computed. For perfectrecall mediators, this is possible in polynomial time via the sequence form; for imperfect-recall mediators, the problem is NP-hard, in general, but the team belief DAG of Zhang et al. [32] can be used to recover fixed-parameter algorithms. For the notions of correlated equilibrium, this method results in basically the same LP as the correlation DAG of Zhang et al. [31]. We do not claim that all of these notions are easy to motivate. For example, correlated equilibria are usually arrived at in the “truth known, imperfect recall” setting; the correlated equilibrium notions where lying is allowed are more difficult to motivate in this respect. Further, even the fixedparameter algorithms of Zhang et al. [31] would fail in this setting, because “public states” can no longer be treated as public due to the possibility of lying players. We leave to future research the problem of finding a motivation for the notions that we do not reference elsewhere in the paper. 7Roughly speaking, this condition is that players should not be able to cause the mediator to gain information apart from their own messages by sending messages. It holds for all notions we discuss in this paper. Formalizing the general case is beyond the scope of this paper. 
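As a small programmatic illustration of the design space just described, the following sketch enumerates the four binary axes; the axis value names are our own informal shorthand, and Table 1 in the paper remains the authoritative listing of the resulting notions.

# Sketch: the four design axes of Section 3.4, enumerated programmatically.
from itertools import product

axes = {
    "mediator observes": ["only players' messages", "information advantage"],
    "mediator recall":   ["perfect", "imperfect"],
    "player messages":   ["can lie", "withhold only"],
    "player deviations": ["per recommendation", "ex ante"],
}
family = list(product(*axes.values()))
print(len(family))                   # 16 conceptually different notions
print(dict(zip(axes, family[0])))    # one example combination of axis values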
4 Experiments We ran our algorithm for communication and full-certification equilibria on various two-player games, and compared the results to those given by notions of optimal correlation in games. The games used in the experiments are given in Appendix D. All experiments were allocated four CPU cores and 64 GB of RAM. Linear programs were solved with Gurobi 9.5. When payments are used, the allowable payment range is [0,M ] where M is the reward range of the game. Experimental results can be found in Table 2. In the battleship and sheriff instances, there is not a significant difference in performance between finding full-certification equilibria and finding optimal correlated equilibria in terms of performance—this is because, unlike in the general case, optimal correlated equilibria in two-player games without chance can be found in polynomial time [29] anyway. In the ridesharing instances, computing optimal correlated equilibria is much more computationally intensive because the game contains non-public chance actions. Computing optimal full-certification equilibria is comparably easy, and this difference is clearly seen in the timing results. Finding optimal communication equilibria is much more intensive than finding optimal fullcertification equilibria, owing to the quadratic size of the augmented game for communication equilibria. This often causes communication equilibria to be the hardest of the notions to compute in practice, despite optimal correlation being NP-hard. In Figure 1, we have plotted the payoff spaces of some representative instances. The plots show how the polytopes of communication and full-certification equilibria behave relative to correlated equilibria. In the battleship and sheriff instances, the space of communication equilibrium payoffs is a single point, which implies that the space of NFCE (and hence Nash) equilibrium payoffs is also that single point. Unfortunately, that point is the Pareto-least-optimal point in the space of EFCEs. In the ridesharing instances, communication allows higher payoffs. This is because the mediator is allowed to “leak” information between players. 5 Conclusions and future research We have shown that optimal communication and certification equilibria in extensive-form games can be computed via linear programs of polynomial size, or almost-linear size in the full-certification case. We have used our machinery to derive an entire family of equilibrium concepts which we hope to be of use in the future. Possible future directions include the following. 1. Are there efficient online learning dynamics, in any reasonable sense of that term, that converge to certification or communication equilibrium? 2. Is there a better-than-quadratic-size linear program for communication equilibria? 3. Is it possible to extend our augmented game construction to also cover normal-form corre- lated equilibria while maintaining efficiency? 4. Investigate further the comparison between communication and correlation in games. For example, when and why do communication equilibria achieve higher social welfare than extensive-form correlated equilibria? Acknowledgements This material is based on work supported by the National Science Foundation under grants IIS-1901403 and CCF-1733556, and the ARO under award W911NF2010081.
1. What is the focus of the paper regarding correlated equilibrium in sequential games?
2. What are the strengths of the proposed approach, particularly in its originality and correctness?
3. What are the weaknesses of the paper regarding its exposition and lack of examples?
4. How does the reviewer assess the significance of the work and its suitability for different conferences or journals?
5. What additional questions does the reviewer have regarding the paper's content and potential applications?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
Many papers have been dedicated to the study of correlated equilibrium in sequential games in recent years. In this work, the authors argue that focusing on (variants of) correlated equilibrium is too restrictive, as the inherent NP-hardness of finding optimal equilibria precludes them from being used in real-world applications. Because the root of the problem appears to be the imperfect recall of the mediator, i.e., the correlation device, they suggest studying a more general class of mediators, capable of not just sending messages to players but also receiving them, effectively broadcasting some information. The main result presented in this paper claims that, under the assumption that a certain property of the signal sets holds, computing an optimal certification equilibrium in sequential games is a polynomial-time problem. As an intermediate result, the authors also obtain an analogue of the revelation principle in their setting. Next, the authors introduce two special cases of certification equilibria, allowing for faster computation (in the case of full-certification equilibria) or incorporation of the mediator's own observations into its decision-making process (in the case of (S,M)-certification equilibria). After a short section dedicated to constructing an entire family of solution concepts based on the mediator's recall and observations and the players' abilities to lie and deviate, recovering several correlated equilibria as special cases, the authors move to the empirical evaluation. Its main outcome appears to be that, when the mediator seeks to maximize social welfare, in the three classes of games examined by the authors, the algorithm computing certification equilibria generally provides higher-quality solutions in a shorter time than those computing correlated equilibria.
Strengths And Weaknesses
The idea this paper examines seems extremely intriguing to me. The motivation is clear, and the connections the authors make between different solution concepts, together with the argumentation about their validity in applications and their computability, present an interesting point of view with which I am inclined to mostly agree. As far as I can tell, after going through the appendix as well, the results seem to be original and correct. The experiments further corroborate the claims, letting the reader compare both the optimal values and the computation times, and the way the authors visualize the results is coherent and easily comprehensible. My main concern relates to this work's overall exposition, though. The field of algorithmic game theory, and sequential games in particular, often requires a careful buildup that patiently guides the reader through all the introduced concepts, because of their inherent complexity and an abundance of notation, especially when discussing multiple types of equilibria. The authors certainly do not seem to be at fault here, but NeurIPS' space limit does not appear to leave enough room for it. Because of that, even as someone fairly acquainted with the field, I had to reread several parts multiple times to understand what the authors are perhaps trying to say and how the individual solution concepts they discuss relate to one another. The absence of any examples besides the briefly described Figure 1 further confuses the reader. For example, it took me a while to understand even why Theorem 3.2 immediately implies the polynomial-time computability of optimal communication equilibria when it speaks about certification equilibria.
Or why program (1) is indeed linear. If I understand it correctly, the players' optimal strategies are basically projections from the mediator's pure strategy space; is that correct? Many details are also missing from the experimental section: for instance, it is not explained, even informally, which domains were employed and why mediator concepts make sense in those situations, and the reader is referred not even to the appendix but to a completely different work. To sum up, I feel like this work puts me in a difficult position: on one hand, I enjoyed the ideas the authors are presenting, but on the other, it may be more suitable for a conference with a more generous space limit, or for a journal. I vote to accept it, as I believe that more examples and discussions could be put in the appendix (even though that is not optimal), and I look forward to further discussing these issues with the authors and fellow reviewers in the next reviewing phase.
Questions
To strengthen the intuition about the presented concepts, could the authors give an example of some related setting (besides the case of imperfect recall of the mediator) that could NOT be modeled using a polynomially computable certification equilibrium, e.g., because the NRC does not hold?
How robust are the results with respect to the optimized function? For example, do the certification equilibria perform well even when the mediator optimizes its own (let's say random) utility?
How large are the games used in the experiments, i.e., how many information sets do they have? How do the algorithms scale with the number of players? Could, e.g., a full-certification equilibrium be computed even in some moderately sized game with four or five players?
Limitations
I could not think of any other limitation not already addressed by the authors.
NIPS
Title Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games Abstract For common notions of correlated equilibrium in extensive-form games, computing an optimal (e.g., welfare-maximizing) equilibrium is NP-hard. Other equilibrium notions—communication [11] and certification [12] equilibria—augment the game with a mediator that has the power to both send and receive messages to and from the players—and, in particular, to remember the messages. In this paper, we investigate both notions in extensive-form games from a computational lens. We show that optimal equilibria in both notions can be computed in polynomial time, the latter under a natural additional assumption known in the literature. Our proof works by constructing a mediator-augmented game of polynomial size that explicitly represents the mediator’s decisions and actions. Our framework allows us to define an entire family of equilibria by varying the mediator’s information partition, the players’ ability to lie, and the players’ ability to deviate. From this perspective, we show that other notions of equilibrium, such as extensive-form correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. As special cases of our general construction, we recover 1) the polynomial-time algorithm of Conitzer and Sandholm [8] for automated mechanism design in Bayes-Nash equilibria and 2) the correlation DAG algorithm of Zhang et al. [31] for optimal correlation. Our algorithm is especially scalable when the equilibrium notion is what we define as the full-certification equilibrium, where players cannot lie about their information but they can be silent. We back up our theoretical claims with experiments on a suite of standard benchmark games. 1 Introduction Various equilibrium notions in general-sum extensive-form games are used to describe situations where the players have access to a trusted third-party mediator, who can communicate with the players. Depending on the power of the mediator and the form of communication, these notions include the normal-form [1] and extensive-form correlated equilibrium (NFCE and EFCE) [29], the normal-form [25] and extensive-form [10] coarse-correlated equilibrium (NFCCE and EFCCE), the communication equilibrium [11], and the certification equilibrium [12]. Several of these notions, in particular the EFCE and EFCCE, were defined for mainly computational reasons: the EFCE as a computationally-reasonable relaxation to NFCE, and the EFCCE as a computationally-reasonable relaxation of EFCE. When the goal is to compute a single correlated equilibrium, these relaxations are helpful: there are polynomial-time algorithms for computing an EFCE [16]. However, from the perspective of computing optimal equilibria—that is, equilibria that 36th Conference on Neural Information Processing Systems (NeurIPS 2022). maximize the expected value of a given function, such as the social welfare—even these relaxations fall short: for all of the correlation notions above, computing an optimal equilibrium of an extensive-form game is NP-hard [29, 10]. On the other hand, notions of equilibrium involving communication in games have arisen. These differ from the notions of correlation in that the mediator can receive and remember information from the players, and therefore pass information between players as necessary to back up their suggestions. 
Certification equilibria [12] further strengthen communication equilibria by allowing players to prove certain information to the mediator. To our knowledge, the computational complexity of optimal communication or certification equilibria has never been studied. We do so in this paper. The main technical result of our paper is a polynomial-time algorithm for computing optimal communication and certification equilibria (the latter under a certain natural condition about what messages the players can send). This stands in stark contrast to the notions of correlation discussed above. To prove our main result, we define a general class of mediator-augmented games, each having polynomial size, that is sufficient to describe all of the above notions of equilibrium except the NFCE1. We also build on this main result in several ways. 1. We define the full-certification equilibrium, which is the special case in which players cannot lie to the mediator (but can opt out of revealing their information). In this case, the algorithm is a linear program whose size is almost linear in the size of the original game. As such, this special case scales extremely well compared to all of the other notions. 2. We formalize notions for incorporating payments in the language of our augmented game. By using payments, mediators can incentivize players to play differently than they otherwise would, possibly to the benefit of the mediator’s utility function. 3. We define an entire family of equilibria using our augmented game, that includes as special cases the communication equilibrium, certification equilibrium, NFCCE, EFCCE, and EFCE. From this perspective, we show that other notions of equilibrium, such as extensiveform correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. We argue that, for this reason, many stated practical applications of correlated equilibria should actually be using communication or certification equilibria instead, which are both easier to compute (in theory, at least) and better at modelling the decision-making process of a rational mediator. 4. We empirically verify the above claims via experiments on a standard set of game instances. Applications and related work. Correlated and communication equilibria have various applications that have been well-documented. Here, we discuss just a few of them, as motivation for our paper. For further discussion of related work, especially relating to automated dynamic mechanism design and persuasion, see Appendix F. Bargaining, negotiation, and conflict resolution [4, 9]. Two parties with asymmetric information wish to arrive at an agreement, say, the price of an item. A mediator, such as a central third-party marketplace, does not know the players’ information but can communicate with the players. Crowdsourcing and ridesharing [13, 22, 31]. A group of players each has individual goals (e.g., to make money by serving customers at specific locations). The players are coordinated by a central party (e.g., a ridesharing company) that has more information than any one of the players, but the players are free to ignore recommendations if they so choose. Persuasion in games [17, 3, 23, 14, 30]. The mediator (in that literature, usually “sender”) has more information than the players (“receivers”), and wishes to tell information to the receivers so as to persuade them to act in a certain way. 
Automated mechanism design [6, 8, 33, 35, 26, 34, 18, 19]. Players have private information unknown to the mediator. The mediator wishes to commit to a strategy—that is, set a mechanism— such that players are incentivized to honestly reveal their information. In fact, in Appendix E we will see that we recover the polynomial-time Bayes-Nash randomized mechanism design algorithm of [6, 8] as a special case of our main result. 1We do not consider the NFCE, because it breaks our paradigm, which enforces that the mediator’s recommendation be a single action. In NFCE, the whole strategy needs to be revealed upfront. It is an open question whether it is possible to even find one NFCE in polynomial time, not to mention an optimal one. Some of the above examples are often used to motivate correlated equilibria. However, when the mediator is a rational agent with the ability to remember information that it is told and pass the information between players as necessary, we will argue that communication or certification equilibrium should be the notion of choice, for both conceptual and computational reasons. 2 Preliminaries In this section, we discuss background on correlation in extensive-form games. Extensive-form games. An extensive-form game Γ with n players consists of the following. 1. A directed tree of nodes or histories H, whose root is denoted ∅. The depth of the tree will be denoted T . The edges out of nodes are labeled with actions, and the set of such actions will be denoted Ah. Given a node h ∈ H and action a at h, the child reached by following action a at node h is denoted ha. The set of terminal (leaf) nodes inH is denoted Z . Terminal nodes will always be denoted z throughout the paper. 2. A partitionH \ Z = HC tH1 t · · · t . . .Hn of nodes, whereHi is the set of all nodes at which player i plays and playerHC is the set of chance nodes. 3. For each player i, a partition Ii of player i’s decision nodes, Hi, into information sets or infosets. Every node in a given information set I must have the same set of actions, denoted AI . We will call the partition I = I1 t · · · t In the players’ information partition. 4. For each player i, a utility vector ui ∈ [0, 1]Z , where ui[z] denotes the utility achieved by player i at terminal node z. 5. For each chance node h ∈ HC, a probability distribution p(·|h) over the children of h. The sequence σi(h) is the list of infosets reached by player i, and actions taken by the player i at those infosets, on the ∅ → h path, not including the infoset at h itself (if any). We will assume that each player has perfect recall—that is, for each infoset I , the sequence of the player acting at I should be the same for each node in I . We will denote this sequence σ(I). In perfect-recall games, nonempty sequences will be identified by the last infoset-action pair Ia in them. We also will assume that games are timeable and fixed-turn-order, that is, information sets do not span multiple levels of the tree, and all nodes in the same layer of the tree belong to the same player2. We will use the following notation. The relation denotes the natural precedence order induced by the tree H: we write h h′ means that h is an ancestor of h′ (or h = h′), and for sets S, S′, we say S S′ if there are some h ∈ S, h′ ∈ S′ such that h h′. The binary operation ∧ denotes the lowest common ancestor: h ∧ h′ is the lowest node u such that u h, h′. For sequences, σ(h) = (σ1(h), . . . , σn(h)) denotes the joint sequence of all players at node h. 
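As a small illustration of the sequence notation and the perfect-recall assumption just introduced, here is a sketch in Python on a made-up toy tree; the tree, its encoding, and all names are our own assumptions rather than an example from the paper.

# Sketch: computing player sequences sigma_i(h) and checking perfect recall on a toy tree.
# Each decision node: (player, infoset, {action: child}); terminal nodes map to None.
tree = {
    "root": ("P1", "I1", {"A": "h1", "B": "h2"}),
    "h1":   ("P1", "I2", {"c": "z1", "d": "z2"}),
    "h2":   ("P1", "I2", {"c": "z3", "d": "z4"}),   # I2 "forgets" whether A or B was played
    "z1": None, "z2": None, "z3": None, "z4": None,
}

PARENT = {child: (node, a) for node, entry in tree.items() if entry
          for a, child in entry[2].items()}

def sequence(player, h):
    """sigma_player(h): infoset-action pairs of `player` on the path to h, excluding h itself."""
    seq = []
    while h in PARENT:
        h, a = PARENT[h]
        p, infoset, _ = tree[h]
        if p == player:
            seq.append((infoset, a))
    return tuple(reversed(seq))

def has_perfect_recall(player):
    by_infoset = {}
    for node, entry in tree.items():
        if entry and entry[0] == player:
            by_infoset.setdefault(entry[1], set()).add(sequence(player, node))
    return all(len(seqs) == 1 for seqs in by_infoset.values())

print(sequence("P1", "z1"))        # (('I1', 'A'), ('I2', 'c'))
print(has_perfect_recall("P1"))    # False: h1 and h2 share I2 but have different sequences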
N(σ) denotes the set of possible next infosets following sequence σ, that is, N(σ) = {I : σ(I) = σ}. The set Σi denotes the set of sequences of player i, and Σ denotes the set of all sequences across all players (i.e., Σ = tiΣi). A pure strategy for a player i is a selection of one action for each information set I ∈ Ii. A pure profile is a tuple of pure strategies. A correlated profile is a distribution over pure profiles. We will generally work with strategies in realization form (see e.g., Koller et al. [20]). Given a pure strategy x, we say that x plays to z ∈ Z if x plays every action on the ∅ → z path. We will call the vector x ∈ {0, 1}Z the realization form of x. The realization form of a mixed strategy is the appropriate convex combination. The set of mixed strategies forms a convex subset of RZ that, so long as the player has perfect recall, can be expressed using linearly many constraints and variables. We will occasionally need to discuss changing information partitions of Γ. If J = J1 t · · · t Jn is another valid information partition, we will use ΓJ to denote the game Γ with its information partition replaced byJ . We will also occasionally need to talk about multiple games simultaneously; where this is the case, we will mark attributes of the game the same as the game itself. For example, Ĥ is the node set of game Γ̂. 2Timeability is not without loss of generality, but any game for which the precedence order is a partial order over infosets can be converted to a timeable game by adding dummy nodes. Given timeability, fixed-turnorder is without loss of generality, also by adding dummy nodes Communication and certification equilibria. Here, we review definitions related to communication equilibria, following Forges [11] and later related papers. Definition 2.1. Let S be a space of possible messages. A pure mediator strategy is a map d : S≤T → S, where S≤T denotes the set of sequences in S of length at most T . A randomized mediator strategy (hereafter simply mediator strategy) is a distribution over pure mediator strategies. We will assume that the space of possible messages is large, but not exponentially so. In particular, we will assume that {⊥} ∪ I ∪ ⋃ hAh ⊆ S (i.e., messages can at least be nothing, information, or actions)3 and that |S| ≤ poly(|H|). The latter assumption is mostly for cleanliness in stating results: we will give algorithms that need S as an input that we wish to run in time poly(|H|). A mediator strategy augments a game as follows. If the strategy is randomized, it first samples a pure strategy d, which is hidden from the players. At each timestep t, a player reaches a history h at which she must act, and observes the infoset I 3 h. She sends a message st ∈ S to the mediator. The mediator then sends a response d(s1, . . . , st), which depends on the message st as well as the messages sent by all other players prior to timestep t. Then, the player chooses her action a ∈ Ah. We will call the sequence of messages sent and received between the mediator and player i, the transcript with player i. A communication equilibrium4 is a Nash equilibrium of the game Γ augmented with a mediator strategy. The mediator is allowed to perform arbitrary communication with the players. In particular, the mediator is allowed to pass information from one player to another. Further, the players are free to send whatever messages they wish to the mediator, including false or empty messages. 
These two factors distinguish communication equilibria from notions of correlated equilibria. In Section 3.4 we will discuss this comparison in greater detail. A useful property in the literature on communication equilibria is the revelation principle (e.g., [11]). Informally, the revelation principle states that any outcome achievable by an arbitrary strategy profile can also be achieved by a direct strategy profile, in which the players tell the mediator all their information and are subsequently directly told by the mediator which action to play. In order to be a communication equilibrium, the players still must not have any incentive to deviate from the protocol. That is, the equilibrium must be robust to all messages that a player may attempt to send to the mediator, even if in equilibrium the player always sends the honest message. Forges and Koessler [12] further introduced a form of equilibrium for Bayesian games which they called certification equilibria. In certification equilibria, the messages that a player may legally send are dependent on their information; as such, some messages that a player can send are verifiable. At each information set I ∈ I, let SI ⊆ S denote the set of messages that the player at infoset I may send to the mediator. We will always assume that I ∈ SI and ⊥ ∈ SI for all I . That is, all players always have the options of revealing their true information or revealing nothing. 3 Extensive-form S-certification equilibria The central notion of interest in this paper is a generalization of the notion of certification equilibria [12] to extensive-form games. Definition 3.1. Given an extensive-form game Γ and a family of valid message sets S = {SI : I ∈ I}, an S-certification equilibrium is a Nash equilibrium of the game augmented by a randomized mediator, in which each player at each information set I is restricted to sending a message s ∈ SI . The existence of S-certification equilibria follows from the existence of Nash equilibria, which are the special case where the mediator does nothing. We will need one extra condition on the message sets, which is known as the nested range condition (NRC) [15]: if I ∈ SI′ , then SI ⊆ SI′ . That is, if a player with information I ′ can lie by pretending to have information I , then that player can also emulate any other message she would have been 3A priori, although the messages are given these names, they carry no semantic meaning. The revelation principle is used to assign natural meaning to the messages. 4Previous models of communication in games [11, 12] usually worked with a model in which players send messages, receive messages, and play moves simultaneously, rather than in sequence as in the extensive-game model that we use. The simultaneous-move model is easy to recreate in extensive form: by adding further “dummy nodes” at which players learn information but only have one legal action, we can effectively re-order when players ought to communicate their information to the mediator. able to send at I . Equivalently, the honest message I should be the most certifiable message that a player can send at infoset I . Our main result is the following. Theorem 3.2. Let uM ∈ RZ be an arbitrary utility vector for the mediator. Then there is a polynomial-time algorithm that, given a game Γ and a message set family S satisfying the nested range condition, computes an optimal S-certification equilibrium, that is, one that maximizes Ez uM[z] where the expectation is over playouts of the game under equilibrium. 
In particular, by setting SI = S for all I , Theorem 3.2 implies that optimal communication equilibria can be computed in polynomial time. The rest of the paper is organized as follows. First, we will prove our main theorem. Along the way, we will demonstrate a form of revelation principle for S-certification equilibria. We will then discuss comparisons to other known forms of equilibrium, including the extensive-form correlated equilibrium [29], and several other natural extensions of our model. Finally, we will show experimental results that compare the computational efficiency and social welfare of various notions of equilibrium on some experimental game instances. 3.1 Proof of Theorem 3.2: The single-deviator mediator-augmented game In this section, we construct a game Γ̂, with n + 1 players, that describes the game Γ where the mediator has been added as an explicit player. This game has similar structure to the one used by Forges [11, Corollary 2], but, critically, has size polynomial in |H|. This is due to two critical differences. First, the players are assumed to either send ⊥, or send messages that mediator cannot immediately prove to be off-equilibrium. In particular, if the player’s last message was I and the mediator recommended action a at I , the player must send a message I ′ with σ(I ′) = Ia. If this is impossible, the player must send ⊥. Therefore, in particular, we will assume that SI consists of only ⊥ and information sets I ′ at the same level as I . Second, only one player is allowed to deviate. Therefore, the strategy of the mediator is not defined in cases where two or more players deviate. We now formalize Γ̂. Nodes in Γ̂ will be identified by tuples (h, τ , r) where h ∈ H is a history in Γ, τ = (τ1, . . . , τn) is the collection of transcripts with all players, and r ∈ {REV, REC, ACT} is a stage marker that denotes whether the current state is one in which a player should be revealing information (REV), the mediator should be recommending a move (REC), or the player should be selecting an action (ACT). The progression of Γ̂ is then defined as follows. We will use the notation τ [i·s] to denote appending message s to τi. • The root node of Γ̂ is (∅, (∅, . . . ,∅), REV). • Nodes (z, τ , REV) for z ∈ Z are also terminal in Γ. The mediator gets utility uM[z], where u is the mediator’s utility function as in Theorem 3.2. All other players i get utility ui[z]. • Nodes (h, τ , REV) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, there is one valid transition, to (h, τ , ACT). 2. If some other player j 6= i has already deviated (i.e., σj(h) 6= τj)), there is one valid transition, to (h, τ [i·I], REC) where I 3 h. 3. If player i has deviated or no one has deviated, then player i observes the infoset I 3 h, and selects a legal message I ′ ∈ SI ∩ ({⊥} ∪N(τi)) to send to the mediator5. Transition to (h, τ [i·I ′], REV). • At (h, τ , REC) where h ∈ Hi, the mediator observes the transcript τi and makes a recommendation a. If τi contains any ⊥ messages, then a = ⊥. Otherwise, a is a legal action a ∈ AI , where I is the most recent message in τi. Transition to (h, τ [i·a], ACT). • Nodes (h, τ , ACT) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, then chance samples a random action a ∼ p(·|h). Transition to (ha, τ , REV). 2. If some other player j 6= i has already deviated, there is one valid transition, to (ha, τ , REC), where a is the action sent by the mediator. 
5If τi contains any ⊥ messages, then we take N(τi) = ∅ 3. If player i has deviated or no one has deviated, then player i observes the transcript τi, and selects an action a′ ∈ Ah. Transition to (ha′, τ , REV). The action a′ need not be the recommended action. Since at most one player can ever deviate by construction, and the length of the transcripts are fixed because turn order is common knowledge, the transcripts τ can be identified with sequences σi of the deviated player, if any. We will make this identification: we will use the shorthand hσi to denote the history (h, (σ−i(h), σi), REV), and h⊥ for (h,σ(h), REV) (i.e., no one has deviated yet). Therefore, in particular, this game has at most O(|H||Σ|) histories. For each non-mediator player, there is a well-defined direct strategy x̂∗i for that player: always report her true information I 3 h, and always play the action recommended by the mediator. The goal of the mediator is to find a strategy x̂M for itself that maximizes its expected utility, subject to the constraint that each player’s direct strategy is a best response—that is, find x̂M such that (x̂M, x̂ ∗ 1, . . . , x̂ ∗ n) is a (strong) Stackelberg equilibrium of Γ̂. We claim that finding a mediator strategy x̂M that is a strong Stackelberg equilibrium in Γ̂ is equivalent to finding an optimal S-certification equilibrium in Γ. We prove this in two parts. First, we prove a version of the revelation principle for S-certification equilibria. Definition 3.3. An S-certification equilibrium is direct if it satisfies the following two properties. 1. (Mediator directness) If the transcript τi of a player i is exactly some sequence of player i, and player i sends an infoset I with σ(I) = τi, then the mediator replies with an action a ∈ AI . Otherwise6, the mediator replies ⊥. 2. (Player directness) In equilibrium, players always send their true information I , and, upon receiving an action a ∈ AI , always play that action. Proposition 3.4 (Revelation principle for S-certification equilibria under NRC). Assume that S satisfies the nested range condition. For any S-certification equilibrium, there is a realizationequivalent direct equilibrium. Omitted proofs can be found in the appendix. Since direct mediator strategies are exactly the mediator strategies in Γ̂, and the player strategies are only limited versions of what they are allowed to do in S-certification equilibrium, this implies that, for any S-certification equilibrium, there is a mediator strategy x̂M in Γ̂ such that (x̂M, x̂∗1, . . . , x̂ ∗ n) is a Stackelberg equilibrium. We will also need the converse of this statement. Proposition 3.5. Let x̂M be a strategy for the mediator in Γ̂ such that, in the strategy profile (x̂M, x̂ ∗ 1, . . . , x̂ ∗ n), every x̂ ∗ i for i 6= M is a best response. Then there is an direct S-certification equilibrium that is realization-equivalent to (x̂M, x̂∗1, . . . , x̂ ∗ n). Therefore, we have shown that the mediator strategies x̂M in Γ̂ for which (x̂M, x̂∗1, . . . , x̂ ∗ n) is a Stackelberg equilibrium in Γ̂ correspond exactly to optimal S-certification equilibria of Γ. Such a Stackelberg equilibrium can be found by solving the following program: max x̂M∈X̂M ∑ ẑ∈Ẑ x̂M[ẑ]ûM[ẑ]p̂(ẑ) ∏ i∈[n] x̂∗i [ẑ] s.t. max x̂′j∈X̂j ∑ ẑ∈Ẑ x̂M[ẑ]ûi[ẑ]p̂(ẑ) ( x̂′j [ẑ]− x̂∗j [ẑ] )∏ i 6=j x̂∗i [ẑ] ≤ 0 ∀j ∈ [n] (1) where X̂i is the sequence-form strategy space [20] of player i in Γ̂. The only variables in the program are x̂i for each player i and the mediator. In particular, the direct strategies x̂∗i are constants. 
Therefore, the objective is a linear function, and the inner maximization constraints are bilinear in x̂M and x̂j . Therefore, this program can be converted to a linear program by dualizing the inner optimizations. For more details on this conversion, see Appendix B. The result is a linear program of size O(n|Ĥ| ) = O(n|H||Σ|). We have thus proved Theorem 3.2. 6This condition is necessary because, if the mediator does not know what infoset the player is in, the mediator may not be able to send the player a valid action, because action sets may differ by infoset. 3.2 Extensions and special cases In this section, we describe several extensions and interesting special cases of our main result. Full-certification equilibria. One particular special case of S-certification equilibria which is particularly useful. We define a full-certification equilibrium as an S-certification equilibrium where SI = {⊥, I}. Intuitively, this means that players cannot lie to the mediator, but they may withhold information. We will call such an equilibrium full-certification. Removing valid messages from the players only reduces their ability to deviate and thus increases the space of possible equilibrium strategies. As such, the full-certification equilibria are the largest class of S-certification equilibria. For full-certification equilibria, the size of game Γ̂ reduces dramatically. Indeed, in all histories hIa of Γ̂, we must have I h. Therefore, we have |Ĥ| ≤ |H|BD where B is the maximum branching factor and D is the depth of the game tree, i.e., the size of Γ̂ goes from essentially quadratic to essentially quasilinear in |H|. The mediator’s decision points in Γ̂ for a full-certification equilibrium are the trigger histories used by Zhang et al. [31] in their analysis of various notions of correlated equilibria. Later, we will draw further connections between full certification and correlation. Changing the mediator’s information. In certain cases, the mediator, in addition to messages that it is sent by the players, also has its own observations about the world. These are trivial to incorporate into our model: simply change the information partition of the mediator in Γ̂ as needed. Alternatively, one can imagine adding a “player”, with no rewards (hence no incentive to deviate), whose sole purpose is to observe information and pass it to the mediator. For purposes of keeping the game small, it is easier to adopt the former method. To this end, consider any refinement partition M of the mediator infosets in Γ̂, and consider the game Γ̂M created by replacing the mediator’s information partition in Γ̂ withM. Then we make the following definition. Definition 3.6. An (S,M)-certification equilibrium of Γ is a mediator strategy x̂M in Γ̂M such that, in the strategy profile (x̂M, x̂∗1, . . . , x̂ ∗ n), every x ∗ i for i 6= M is a best response. (S,M)-certification equilibria may not exist: indeed, ifM is coarser than the mediator’s original information partition in Γ̂, then the mediator may not have enough information to provide good recommendations under the restrictions of Γ̂. This can be remedied by allowing payments (see Appendix E), or by making the assumption that the mediator at least knows the transcript of the player to whom she is making any nontrivial recommendation: Definition 3.7. 
A mediator partitionM is direct if, at every mediator decision point (h, τ , REC), so long as |Ah| > 1, the mediator knows the transcript of the player acting at h.M is strongly direct if the mediator also observes the transcript when |Ah| = 1. The condition |Ah| > 1 in the definition allows the mediator to possibly not observe the full information of a player if she does not need to make a nontrivial recommendation to that player. In particular, this allows players to sometimes have information that they only partially reveal to the mediator, so long as the player does not immediately need to act on such information. Coarseness. In literature on correlation, coarseness refers to the restriction that a player must obey any recommendation that she receives (but may choose to deviate by not requesting a recommendation and instead playing any other action). Normal-form coarseness further adds the restriction that players can only choose to deviate at the start of the game—the mediator essentially takes over and plays the game on behalf of non-deviating players. These notions can easily be expressed in terms of our augmented game, therefore also allowing us to express coarse versions of our equilibrium notions as augmented games. 3.3 The gap between polynomial and not polynomial If players cannot send messages to the mediator at all, and the mediator has no other way of gaining any information, we recover the notion of autonomous correlated equilibrium (ACE). It is NP-hard to compute optimal ACE, even in Bayesian games (see e.g., von Stengel and Forges [29]). WhenM is direct and perfect recall, computing an optimal direct (S,M)-certification equilibrium can be done in polynomial time using our framework. When S obeys NRC and M satisfies a stronger condition7, the proof of the revelation principle (Propositions 3.4 and 3.5) works, and the resulting equilibrium is guaranteed to be optimal over all possible equilibria including those that may not be direct. If NRC does not hold, one can still solve the program (1), and the solution is still guaranteed to be an optimal direct equilibrium by Proposition 3.5. However, it is not guaranteed to be optimal over all possible communication structures. Indeed, Green and Laffont [15, Theorem 1] give an instance in which, without NRC, there can be an outcome distribution that is not implementable by a direct mediator. Our program cannot find such an outcome distribution. The counterexample does not preclude the possibility of efficient algorithms for finding optimal certification equilibria in more general cases, but does give intuition for why NRC is crucial to our construction. We could also consider changing the mediator’s information partition so that the mediator does not have perfect recall. This transformation allows us to recover notions of correlation in games. Indeed, if we start from the full-certification equilibrium and only allow the mediator to remember the transcript with the player she is currently talking to, we recover EFCE. Adding coarseness similarly recovers EFCCE and NFCCE. In this setting, the inability to represent the strategy space of an imperfect-recall player may result in the loss of efficient algorithms. 
3.4 A family of equilibria By varying 1) what the mediator observes, 2) whether the mediator has perfect recall, 3) whether the players can lie or only withhold information, and 4) when and how players can deviate from the mediator’s recommended actions, we can use our framework to define a family consisting of 16 conceptually different equilibrium notions. More can be generated by considering other variations in this design space, but we focus on the extreme cases in the table. Some of these were already defined in the literature; the remaining names are ours. The result is Table 1. An inclusion diagram for these notions can be found in Appendix G. In the table, ex ante means that players have only a binary choice between deviating (in which case they can play whatever they want) and playing (in which case they must always be direct and obey recommendations). With ex ante deviations, it does not matter whether lying is allowed because we can never get to that stage: either the player deviates immediately and never communicates with the mediator, or the player is direct. If the mediator only remembers the current active player’s information, and players cannot lie, withholding and coarsely deviating are the same. Mediator information advantage means that the mediator always learns the infoset of the current active player, and therefore requires no messages from the players. This is equivalent to forcing players to truthfully report information. A mediator with information advantage may still not have perfect information—for example, it will not know whether a player (or nature) has played an action until some other player observes the action. In this setting, the mediator may also have extra private information (known to none of the players), leading to the setting of Bayesian persuasion [17]. In extensive-form games, there are two different reasonable notions of persuasion: one that stems from extending correlated equilibria, and one that stems from extending communication equilibria. The distinction is that, in the former, the mediator has imperfect recall. For a more in-depth discussion of Bayesian persuasion, see Appendix F. Our framework allows optimal equilibria for all notions in the table to be computed. For perfectrecall mediators, this is possible in polynomial time via the sequence form; for imperfect-recall mediators, the problem is NP-hard, in general, but the team belief DAG of Zhang et al. [32] can be used to recover fixed-parameter algorithms. For the notions of correlated equilibrium, this method results in basically the same LP as the correlation DAG of Zhang et al. [31]. We do not claim that all of these notions are easy to motivate. For example, correlated equilibria are usually arrived at in the “truth known, imperfect recall” setting; the correlated equilibrium notions where lying is allowed are more difficult to motivate in this respect. Further, even the fixedparameter algorithms of Zhang et al. [31] would fail in this setting, because “public states” can no longer be treated as public due to the possibility of lying players. We leave to future research the problem of finding a motivation for the notions that we do not reference elsewhere in the paper. 7Roughly speaking, this condition is that players should not be able to cause the mediator to gain information apart from their own messages by sending messages. It holds for all notions we discuss in this paper. Formalizing the general case is beyond the scope of this paper. 
4 Experiments We ran our algorithm for communication and full-certification equilibria on various two-player games, and compared the results to those given by notions of optimal correlation in games. The games used in the experiments are given in Appendix D. All experiments were allocated four CPU cores and 64 GB of RAM. Linear programs were solved with Gurobi 9.5. When payments are used, the allowable payment range is [0,M ] where M is the reward range of the game. Experimental results can be found in Table 2. In the battleship and sheriff instances, there is not a significant difference in performance between finding full-certification equilibria and finding optimal correlated equilibria in terms of performance—this is because, unlike in the general case, optimal correlated equilibria in two-player games without chance can be found in polynomial time [29] anyway. In the ridesharing instances, computing optimal correlated equilibria is much more computationally intensive because the game contains non-public chance actions. Computing optimal full-certification equilibria is comparably easy, and this difference is clearly seen in the timing results. Finding optimal communication equilibria is much more intensive than finding optimal fullcertification equilibria, owing to the quadratic size of the augmented game for communication equilibria. This often causes communication equilibria to be the hardest of the notions to compute in practice, despite optimal correlation being NP-hard. In Figure 1, we have plotted the payoff spaces of some representative instances. The plots show how the polytopes of communication and full-certification equilibria behave relative to correlated equilibria. In the battleship and sheriff instances, the space of communication equilibrium payoffs is a single point, which implies that the space of NFCE (and hence Nash) equilibrium payoffs is also that single point. Unfortunately, that point is the Pareto-least-optimal point in the space of EFCEs. In the ridesharing instances, communication allows higher payoffs. This is because the mediator is allowed to “leak” information between players. 5 Conclusions and future research We have shown that optimal communication and certification equilibria in extensive-form games can be computed via linear programs of polynomial size, or almost-linear size in the full-certification case. We have used our machinery to derive an entire family of equilibrium concepts which we hope to be of use in the future. Possible future directions include the following. 1. Are there efficient online learning dynamics, in any reasonable sense of that term, that converge to certification or communication equilibrium? 2. Is there a better-than-quadratic-size linear program for communication equilibria? 3. Is it possible to extend our augmented game construction to also cover normal-form corre- lated equilibria while maintaining efficiency? 4. Investigate further the comparison between communication and correlation in games. For example, when and why do communication equilibria achieve higher social welfare than extensive-form correlated equilibria? Acknowledgements This material is based on work supported by the National Science Foundation under grants IIS-1901403 and CCF-1733556, and the ARO under award W911NF2010081.
1. What is the focus and contribution of the paper regarding extensive-form games? 2. What are the strengths and weaknesses of the proposed approach, particularly in its originality, quality, clarity, and significance? 3. Do you have any questions or concerns about the paper's content, such as the restriction of the space of messages or the meaning of "truthful" concepts? 4. Are there any limitations to the paper's findings or applications?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies extensive-form games augmented with a mediator that receives and sends messages from and to the agents involved in the game. In the particular case in which only the mediator can send messages and these can be interpreted as action recommendations, such an extended game resembles the one in the definition of correlated equilibria. At the same time, allowing for messages from the agents to the mediator allows to capture different notions of equilibrium, such as the communication equilibrium and the certification equilibrium. By leveraging such an augmented representation, the paper provides a linear programming formulation for finding optimal communication/certification equilibria in n-player extensive-form games in polynomial time. The theoretical results are complemented with an extensive experimental evaluation on a standard testbed of games. Strengths And Weaknesses ORIGINALITY (+) The paper studies equilibrium concepts (communication and certification equilibria) that have never been addressed before by the algorithmic game theory community. (-) The techniques employed by the authors are fairly standard, since they consist in a simple linear programming formulation. QUALITY (+) As far as I am concerned the results are sound. (-) I think that more formalism in proving the technical results is needed. There are only two propositions in the paper, whose proofs given in the appendix are not sufficiently formalized in mathematical terms. Moreover, I think a proof is also need in order to shows the correctness of the linear programming formulation. CLARITY (+) Overall, the paper is well written and easy to follow. (-) There are only some minor problems that I list in the following: (*) Line 28: Cite [24] after “extensive-form”. (*) In paragraph “Applications and related work.”: You should cite the recent works on learning dynamics converging to correlated equilibria in extensive-form games, such as: “No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium” (NeurIPS 2020), “Efficient Deviation Types and Learning for Hindsight Rationality in Extensive-Form Games” (ICML 2021), and “Hindsight and Sequential Rationality of Correlated Play” (AAAI 2021), and others. (*) Line 95: Remove “player”. (*) Line 106: “We also will” is “We will also”. (*) Adding an example of extended game would help. (*) The terms used in the caption of Table 1 do not appear in the table. SIGNIFICANCE (+) The results could be of interest for the part of the NeurIPS community interested in algorithmic game theory. (-) I have some concerns of the significance of the experimental results. It seems to me that the linear programming formulation does not scale well on large games, especially that one for communication equilibrium (which I think is the most interesting solution concept). Questions Can you provide more details as to why you can restrict the space of messages by using the revelation principle (see footnote 3)? I think more details on this are need in the paper. It is not clear to me what you mean by “truthful EFCE” and other “truthful” concepts. Can you provide some more details on this. Limitations Yes.
NIPS
Title Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games Abstract For common notions of correlated equilibrium in extensive-form games, computing an optimal (e.g., welfare-maximizing) equilibrium is NP-hard. Other equilibrium notions—communication [11] and certification [12] equilibria—augment the game with a mediator that has the power to both send and receive messages to and from the players—and, in particular, to remember the messages. In this paper, we investigate both notions in extensive-form games from a computational lens. We show that optimal equilibria in both notions can be computed in polynomial time, the latter under a natural additional assumption known in the literature. Our proof works by constructing a mediator-augmented game of polynomial size that explicitly represents the mediator’s decisions and actions. Our framework allows us to define an entire family of equilibria by varying the mediator’s information partition, the players’ ability to lie, and the players’ ability to deviate. From this perspective, we show that other notions of equilibrium, such as extensive-form correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. As special cases of our general construction, we recover 1) the polynomial-time algorithm of Conitzer and Sandholm [8] for automated mechanism design in Bayes-Nash equilibria and 2) the correlation DAG algorithm of Zhang et al. [31] for optimal correlation. Our algorithm is especially scalable when the equilibrium notion is what we define as the full-certification equilibrium, where players cannot lie about their information but they can be silent. We back up our theoretical claims with experiments on a suite of standard benchmark games. 1 Introduction Various equilibrium notions in general-sum extensive-form games are used to describe situations where the players have access to a trusted third-party mediator, who can communicate with the players. Depending on the power of the mediator and the form of communication, these notions include the normal-form [1] and extensive-form correlated equilibrium (NFCE and EFCE) [29], the normal-form [25] and extensive-form [10] coarse-correlated equilibrium (NFCCE and EFCCE), the communication equilibrium [11], and the certification equilibrium [12]. Several of these notions, in particular the EFCE and EFCCE, were defined for mainly computational reasons: the EFCE as a computationally-reasonable relaxation to NFCE, and the EFCCE as a computationally-reasonable relaxation of EFCE. When the goal is to compute a single correlated equilibrium, these relaxations are helpful: there are polynomial-time algorithms for computing an EFCE [16]. However, from the perspective of computing optimal equilibria—that is, equilibria that 36th Conference on Neural Information Processing Systems (NeurIPS 2022). maximize the expected value of a given function, such as the social welfare—even these relaxations fall short: for all of the correlation notions above, computing an optimal equilibrium of an extensive-form game is NP-hard [29, 10]. On the other hand, notions of equilibrium involving communication in games have arisen. These differ from the notions of correlation in that the mediator can receive and remember information from the players, and therefore pass information between players as necessary to back up their suggestions. 
Certification equilibria [12] further strengthen communication equilibria by allowing players to prove certain information to the mediator. To our knowledge, the computational complexity of optimal communication or certification equilibria has never been studied. We do so in this paper. The main technical result of our paper is a polynomial-time algorithm for computing optimal communication and certification equilibria (the latter under a certain natural condition about what messages the players can send). This stands in stark contrast to the notions of correlation discussed above. To prove our main result, we define a general class of mediator-augmented games, each having polynomial size, that is sufficient to describe all of the above notions of equilibrium except the NFCE1. We also build on this main result in several ways. 1. We define the full-certification equilibrium, which is the special case in which players cannot lie to the mediator (but can opt out of revealing their information). In this case, the algorithm is a linear program whose size is almost linear in the size of the original game. As such, this special case scales extremely well compared to all of the other notions. 2. We formalize notions for incorporating payments in the language of our augmented game. By using payments, mediators can incentivize players to play differently than they otherwise would, possibly to the benefit of the mediator’s utility function. 3. We define an entire family of equilibria using our augmented game, that includes as special cases the communication equilibrium, certification equilibrium, NFCCE, EFCCE, and EFCE. From this perspective, we show that other notions of equilibrium, such as extensiveform correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator’s imperfect recall. We argue that, for this reason, many stated practical applications of correlated equilibria should actually be using communication or certification equilibria instead, which are both easier to compute (in theory, at least) and better at modelling the decision-making process of a rational mediator. 4. We empirically verify the above claims via experiments on a standard set of game instances. Applications and related work. Correlated and communication equilibria have various applications that have been well-documented. Here, we discuss just a few of them, as motivation for our paper. For further discussion of related work, especially relating to automated dynamic mechanism design and persuasion, see Appendix F. Bargaining, negotiation, and conflict resolution [4, 9]. Two parties with asymmetric information wish to arrive at an agreement, say, the price of an item. A mediator, such as a central third-party marketplace, does not know the players’ information but can communicate with the players. Crowdsourcing and ridesharing [13, 22, 31]. A group of players each has individual goals (e.g., to make money by serving customers at specific locations). The players are coordinated by a central party (e.g., a ridesharing company) that has more information than any one of the players, but the players are free to ignore recommendations if they so choose. Persuasion in games [17, 3, 23, 14, 30]. The mediator (in that literature, usually “sender”) has more information than the players (“receivers”), and wishes to tell information to the receivers so as to persuade them to act in a certain way. 
Automated mechanism design [6, 8, 33, 35, 26, 34, 18, 19]. Players have private information unknown to the mediator. The mediator wishes to commit to a strategy—that is, set a mechanism— such that players are incentivized to honestly reveal their information. In fact, in Appendix E we will see that we recover the polynomial-time Bayes-Nash randomized mechanism design algorithm of [6, 8] as a special case of our main result. 1We do not consider the NFCE, because it breaks our paradigm, which enforces that the mediator’s recommendation be a single action. In NFCE, the whole strategy needs to be revealed upfront. It is an open question whether it is possible to even find one NFCE in polynomial time, not to mention an optimal one. Some of the above examples are often used to motivate correlated equilibria. However, when the mediator is a rational agent with the ability to remember information that it is told and pass the information between players as necessary, we will argue that communication or certification equilibrium should be the notion of choice, for both conceptual and computational reasons. 2 Preliminaries In this section, we discuss background on correlation in extensive-form games. Extensive-form games. An extensive-form game Γ with n players consists of the following. 1. A directed tree of nodes or histories H, whose root is denoted ∅. The depth of the tree will be denoted T . The edges out of nodes are labeled with actions, and the set of such actions will be denoted Ah. Given a node h ∈ H and action a at h, the child reached by following action a at node h is denoted ha. The set of terminal (leaf) nodes inH is denoted Z . Terminal nodes will always be denoted z throughout the paper. 2. A partitionH \ Z = HC tH1 t · · · t . . .Hn of nodes, whereHi is the set of all nodes at which player i plays and playerHC is the set of chance nodes. 3. For each player i, a partition Ii of player i’s decision nodes, Hi, into information sets or infosets. Every node in a given information set I must have the same set of actions, denoted AI . We will call the partition I = I1 t · · · t In the players’ information partition. 4. For each player i, a utility vector ui ∈ [0, 1]Z , where ui[z] denotes the utility achieved by player i at terminal node z. 5. For each chance node h ∈ HC, a probability distribution p(·|h) over the children of h. The sequence σi(h) is the list of infosets reached by player i, and actions taken by the player i at those infosets, on the ∅ → h path, not including the infoset at h itself (if any). We will assume that each player has perfect recall—that is, for each infoset I , the sequence of the player acting at I should be the same for each node in I . We will denote this sequence σ(I). In perfect-recall games, nonempty sequences will be identified by the last infoset-action pair Ia in them. We also will assume that games are timeable and fixed-turn-order, that is, information sets do not span multiple levels of the tree, and all nodes in the same layer of the tree belong to the same player2. We will use the following notation. The relation denotes the natural precedence order induced by the tree H: we write h h′ means that h is an ancestor of h′ (or h = h′), and for sets S, S′, we say S S′ if there are some h ∈ S, h′ ∈ S′ such that h h′. The binary operation ∧ denotes the lowest common ancestor: h ∧ h′ is the lowest node u such that u h, h′. For sequences, σ(h) = (σ1(h), . . . , σn(h)) denotes the joint sequence of all players at node h. 
N(σ) denotes the set of possible next infosets following sequence σ, that is, N(σ) = {I : σ(I) = σ}. The set Σi denotes the set of sequences of player i, and Σ denotes the set of all sequences across all players (i.e., Σ = tiΣi). A pure strategy for a player i is a selection of one action for each information set I ∈ Ii. A pure profile is a tuple of pure strategies. A correlated profile is a distribution over pure profiles. We will generally work with strategies in realization form (see e.g., Koller et al. [20]). Given a pure strategy x, we say that x plays to z ∈ Z if x plays every action on the ∅ → z path. We will call the vector x ∈ {0, 1}Z the realization form of x. The realization form of a mixed strategy is the appropriate convex combination. The set of mixed strategies forms a convex subset of RZ that, so long as the player has perfect recall, can be expressed using linearly many constraints and variables. We will occasionally need to discuss changing information partitions of Γ. If J = J1 t · · · t Jn is another valid information partition, we will use ΓJ to denote the game Γ with its information partition replaced byJ . We will also occasionally need to talk about multiple games simultaneously; where this is the case, we will mark attributes of the game the same as the game itself. For example, Ĥ is the node set of game Γ̂. 2Timeability is not without loss of generality, but any game for which the precedence order is a partial order over infosets can be converted to a timeable game by adding dummy nodes. Given timeability, fixed-turnorder is without loss of generality, also by adding dummy nodes Communication and certification equilibria. Here, we review definitions related to communication equilibria, following Forges [11] and later related papers. Definition 2.1. Let S be a space of possible messages. A pure mediator strategy is a map d : S≤T → S, where S≤T denotes the set of sequences in S of length at most T . A randomized mediator strategy (hereafter simply mediator strategy) is a distribution over pure mediator strategies. We will assume that the space of possible messages is large, but not exponentially so. In particular, we will assume that {⊥} ∪ I ∪ ⋃ hAh ⊆ S (i.e., messages can at least be nothing, information, or actions)3 and that |S| ≤ poly(|H|). The latter assumption is mostly for cleanliness in stating results: we will give algorithms that need S as an input that we wish to run in time poly(|H|). A mediator strategy augments a game as follows. If the strategy is randomized, it first samples a pure strategy d, which is hidden from the players. At each timestep t, a player reaches a history h at which she must act, and observes the infoset I 3 h. She sends a message st ∈ S to the mediator. The mediator then sends a response d(s1, . . . , st), which depends on the message st as well as the messages sent by all other players prior to timestep t. Then, the player chooses her action a ∈ Ah. We will call the sequence of messages sent and received between the mediator and player i, the transcript with player i. A communication equilibrium4 is a Nash equilibrium of the game Γ augmented with a mediator strategy. The mediator is allowed to perform arbitrary communication with the players. In particular, the mediator is allowed to pass information from one player to another. Further, the players are free to send whatever messages they wish to the mediator, including false or empty messages. 
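To give a concrete picture of Definition 2.1: a pure mediator strategy is simply a lookup from the sequence of messages received so far to the next reply, and a randomized mediator strategy is a distribution over such lookups, sampled once at the start of play and hidden from the players. The following minimal sketch is ours and purely illustrative of the interface; it is not an implementation from the paper.

```python
import random
from typing import Dict, Tuple, Hashable

Message = Hashable  # an element of S: e.g. an infoset, an action, or the empty message "⊥"

class PureMediatorStrategy:
    """d : S^{<=T} -> S (Definition 2.1): the reply is a deterministic function of the
    entire sequence of messages received so far, across all players."""
    def __init__(self, reply: Dict[Tuple[Message, ...], Message], default: Message = "⊥"):
        self.reply, self.default = reply, default

    def __call__(self, transcript: Tuple[Message, ...]) -> Message:
        return self.reply.get(transcript, self.default)

class RandomizedMediatorStrategy:
    """A distribution over pure mediator strategies, sampled once at the start of play."""
    def __init__(self, pure_strategies, weights, seed=None):
        self.pure_strategies, self.weights = pure_strategies, weights
        self.rng = random.Random(seed)

    def sample(self) -> PureMediatorStrategy:
        return self.rng.choices(self.pure_strategies, weights=self.weights, k=1)[0]
```

Nothing in this interface restricts what the players may send (messages can be false or empty), and nothing prevents the mediator from folding one player's earlier report into its reply to another player.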
These two factors distinguish communication equilibria from notions of correlated equilibria. In Section 3.4 we will discuss this comparison in greater detail. A useful property in the literature on communication equilibria is the revelation principle (e.g., [11]). Informally, the revelation principle states that any outcome achievable by an arbitrary strategy profile can also be achieved by a direct strategy profile, in which the players tell the mediator all their information and are subsequently directly told by the mediator which action to play. In order to be a communication equilibrium, the players still must not have any incentive to deviate from the protocol. That is, the equilibrium must be robust to all messages that a player may attempt to send to the mediator, even if in equilibrium the player always sends the honest message. Forges and Koessler [12] further introduced a form of equilibrium for Bayesian games which they called certification equilibria. In certification equilibria, the messages that a player may legally send are dependent on their information; as such, some messages that a player can send are verifiable. At each information set I ∈ I, let SI ⊆ S denote the set of messages that the player at infoset I may send to the mediator. We will always assume that I ∈ SI and ⊥ ∈ SI for all I . That is, all players always have the options of revealing their true information or revealing nothing. 3 Extensive-form S-certification equilibria The central notion of interest in this paper is a generalization of the notion of certification equilibria [12] to extensive-form games. Definition 3.1. Given an extensive-form game Γ and a family of valid message sets S = {SI : I ∈ I}, an S-certification equilibrium is a Nash equilibrium of the game augmented by a randomized mediator, in which each player at each information set I is restricted to sending a message s ∈ SI . The existence of S-certification equilibria follows from the existence of Nash equilibria, which are the special case where the mediator does nothing. We will need one extra condition on the message sets, which is known as the nested range condition (NRC) [15]: if I ∈ SI′ , then SI ⊆ SI′ . That is, if a player with information I ′ can lie by pretending to have information I , then that player can also emulate any other message she would have been 3A priori, although the messages are given these names, they carry no semantic meaning. The revelation principle is used to assign natural meaning to the messages. 4Previous models of communication in games [11, 12] usually worked with a model in which players send messages, receive messages, and play moves simultaneously, rather than in sequence as in the extensive-game model that we use. The simultaneous-move model is easy to recreate in extensive form: by adding further “dummy nodes” at which players learn information but only have one legal action, we can effectively re-order when players ought to communicate their information to the mediator. able to send at I . Equivalently, the honest message I should be the most certifiable message that a player can send at infoset I . Our main result is the following. Theorem 3.2. Let uM ∈ RZ be an arbitrary utility vector for the mediator. Then there is a polynomial-time algorithm that, given a game Γ and a message set family S satisfying the nested range condition, computes an optimal S-certification equilibrium, that is, one that maximizes Ez uM[z] where the expectation is over playouts of the game under equilibrium. 
In particular, by setting SI = S for all I , Theorem 3.2 implies that optimal communication equilibria can be computed in polynomial time. The rest of the paper is organized as follows. First, we will prove our main theorem. Along the way, we will demonstrate a form of revelation principle for S-certification equilibria. We will then discuss comparisons to other known forms of equilibrium, including the extensive-form correlated equilibrium [29], and several other natural extensions of our model. Finally, we will show experimental results that compare the computational efficiency and social welfare of various notions of equilibrium on some experimental game instances. 3.1 Proof of Theorem 3.2: The single-deviator mediator-augmented game In this section, we construct a game Γ̂, with n + 1 players, that describes the game Γ where the mediator has been added as an explicit player. This game has similar structure to the one used by Forges [11, Corollary 2], but, critically, has size polynomial in |H|. This is due to two critical differences. First, the players are assumed to either send ⊥, or send messages that mediator cannot immediately prove to be off-equilibrium. In particular, if the player’s last message was I and the mediator recommended action a at I , the player must send a message I ′ with σ(I ′) = Ia. If this is impossible, the player must send ⊥. Therefore, in particular, we will assume that SI consists of only ⊥ and information sets I ′ at the same level as I . Second, only one player is allowed to deviate. Therefore, the strategy of the mediator is not defined in cases where two or more players deviate. We now formalize Γ̂. Nodes in Γ̂ will be identified by tuples (h, τ , r) where h ∈ H is a history in Γ, τ = (τ1, . . . , τn) is the collection of transcripts with all players, and r ∈ {REV, REC, ACT} is a stage marker that denotes whether the current state is one in which a player should be revealing information (REV), the mediator should be recommending a move (REC), or the player should be selecting an action (ACT). The progression of Γ̂ is then defined as follows. We will use the notation τ [i·s] to denote appending message s to τi. • The root node of Γ̂ is (∅, (∅, . . . ,∅), REV). • Nodes (z, τ , REV) for z ∈ Z are also terminal in Γ. The mediator gets utility uM[z], where u is the mediator’s utility function as in Theorem 3.2. All other players i get utility ui[z]. • Nodes (h, τ , REV) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, there is one valid transition, to (h, τ , ACT). 2. If some other player j 6= i has already deviated (i.e., σj(h) 6= τj)), there is one valid transition, to (h, τ [i·I], REC) where I 3 h. 3. If player i has deviated or no one has deviated, then player i observes the infoset I 3 h, and selects a legal message I ′ ∈ SI ∩ ({⊥} ∪N(τi)) to send to the mediator5. Transition to (h, τ [i·I ′], REV). • At (h, τ , REC) where h ∈ Hi, the mediator observes the transcript τi and makes a recommendation a. If τi contains any ⊥ messages, then a = ⊥. Otherwise, a is a legal action a ∈ AI , where I is the most recent message in τi. Transition to (h, τ [i·a], ACT). • Nodes (h, τ , ACT) for non-terminal h are decision nodes for the player i who acts at h. 1. If i is chance, then chance samples a random action a ∼ p(·|h). Transition to (ha, τ , REV). 2. If some other player j 6= i has already deviated, there is one valid transition, to (ha, τ , REC), where a is the action sent by the mediator. 
3. If player i has deviated or no one has deviated, then player i observes the transcript τi and selects an action a′ ∈ Ah. Transition to (ha′, τ, REV). The action a′ need not be the recommended action.

⁵If τi contains any ⊥ messages, then we take N(τi) = ∅.

Since at most one player can ever deviate by construction, and the lengths of the transcripts are fixed because the turn order is common knowledge, the transcripts τ can be identified with the sequence σi of the deviating player, if any. We will make this identification: we use the shorthand hσi to denote the history (h, (σ−i(h), σi), REV), and h⊥ for (h, σ(h), REV) (i.e., no one has deviated yet). In particular, this game therefore has at most O(|H||Σ|) histories.

For each non-mediator player i, there is a well-defined direct strategy x̂*i for that player: always report her true information I ∋ h, and always play the action recommended by the mediator. The goal of the mediator is to find a strategy x̂M for itself that maximizes its expected utility, subject to the constraint that each player's direct strategy is a best response—that is, to find x̂M such that (x̂M, x̂*1, . . . , x̂*n) is a (strong) Stackelberg equilibrium of Γ̂.

We claim that finding a mediator strategy x̂M that is a strong Stackelberg equilibrium in Γ̂ is equivalent to finding an optimal S-certification equilibrium in Γ. We prove this in two parts. First, we prove a version of the revelation principle for S-certification equilibria.

Definition 3.3. An S-certification equilibrium is direct if it satisfies the following two properties. 1. (Mediator directness) If the transcript τi of a player i is exactly some sequence of player i, and player i sends an infoset I with σ(I) = τi, then the mediator replies with an action a ∈ AI. Otherwise⁶, the mediator replies ⊥. 2. (Player directness) In equilibrium, players always send their true information I and, upon receiving an action a ∈ AI, always play that action.

Proposition 3.4 (Revelation principle for S-certification equilibria under NRC). Assume that S satisfies the nested range condition. For any S-certification equilibrium, there is a realization-equivalent direct equilibrium.

Omitted proofs can be found in the appendix. Since direct mediator strategies are exactly the mediator strategies in Γ̂, and the player strategies in Γ̂ are only restricted versions of what players may do in an S-certification equilibrium, this implies that, for any S-certification equilibrium, there is a mediator strategy x̂M in Γ̂ such that (x̂M, x̂*1, . . . , x̂*n) is a Stackelberg equilibrium. We will also need the converse of this statement.

Proposition 3.5. Let x̂M be a strategy for the mediator in Γ̂ such that, in the strategy profile (x̂M, x̂*1, . . . , x̂*n), every x̂*i for i ≠ M is a best response. Then there is a direct S-certification equilibrium that is realization-equivalent to (x̂M, x̂*1, . . . , x̂*n).

Therefore, we have shown that the mediator strategies x̂M in Γ̂ for which (x̂M, x̂*1, . . . , x̂*n) is a Stackelberg equilibrium in Γ̂ correspond exactly to optimal S-certification equilibria of Γ. Such a Stackelberg equilibrium can be found by solving the following program:

\[
\max_{\hat{x}_M \in \hat{X}_M} \; \sum_{\hat{z} \in \hat{Z}} \hat{x}_M[\hat{z}]\, \hat{u}_M[\hat{z}]\, \hat{p}(\hat{z}) \prod_{i \in [n]} \hat{x}^*_i[\hat{z}]
\quad \text{s.t.} \quad
\max_{\hat{x}'_j \in \hat{X}_j} \sum_{\hat{z} \in \hat{Z}} \hat{x}_M[\hat{z}]\, \hat{u}_j[\hat{z}]\, \hat{p}(\hat{z}) \left( \hat{x}'_j[\hat{z}] - \hat{x}^*_j[\hat{z}] \right) \prod_{i \neq j} \hat{x}^*_i[\hat{z}] \;\le\; 0 \quad \forall j \in [n] \qquad (1)
\]

where X̂i is the sequence-form strategy space [20] of player i in Γ̂. The only variables in the program are x̂M for the mediator and x̂′j for each player j; in particular, the direct strategies x̂*i are constants.
Therefore, the objective is a linear function, and the inner maximization constraints are bilinear in x̂M and x̂j . Therefore, this program can be converted to a linear program by dualizing the inner optimizations. For more details on this conversion, see Appendix B. The result is a linear program of size O(n|Ĥ| ) = O(n|H||Σ|). We have thus proved Theorem 3.2. 6This condition is necessary because, if the mediator does not know what infoset the player is in, the mediator may not be able to send the player a valid action, because action sets may differ by infoset. 3.2 Extensions and special cases In this section, we describe several extensions and interesting special cases of our main result. Full-certification equilibria. One particular special case of S-certification equilibria which is particularly useful. We define a full-certification equilibrium as an S-certification equilibrium where SI = {⊥, I}. Intuitively, this means that players cannot lie to the mediator, but they may withhold information. We will call such an equilibrium full-certification. Removing valid messages from the players only reduces their ability to deviate and thus increases the space of possible equilibrium strategies. As such, the full-certification equilibria are the largest class of S-certification equilibria. For full-certification equilibria, the size of game Γ̂ reduces dramatically. Indeed, in all histories hIa of Γ̂, we must have I h. Therefore, we have |Ĥ| ≤ |H|BD where B is the maximum branching factor and D is the depth of the game tree, i.e., the size of Γ̂ goes from essentially quadratic to essentially quasilinear in |H|. The mediator’s decision points in Γ̂ for a full-certification equilibrium are the trigger histories used by Zhang et al. [31] in their analysis of various notions of correlated equilibria. Later, we will draw further connections between full certification and correlation. Changing the mediator’s information. In certain cases, the mediator, in addition to messages that it is sent by the players, also has its own observations about the world. These are trivial to incorporate into our model: simply change the information partition of the mediator in Γ̂ as needed. Alternatively, one can imagine adding a “player”, with no rewards (hence no incentive to deviate), whose sole purpose is to observe information and pass it to the mediator. For purposes of keeping the game small, it is easier to adopt the former method. To this end, consider any refinement partition M of the mediator infosets in Γ̂, and consider the game Γ̂M created by replacing the mediator’s information partition in Γ̂ withM. Then we make the following definition. Definition 3.6. An (S,M)-certification equilibrium of Γ is a mediator strategy x̂M in Γ̂M such that, in the strategy profile (x̂M, x̂∗1, . . . , x̂ ∗ n), every x ∗ i for i 6= M is a best response. (S,M)-certification equilibria may not exist: indeed, ifM is coarser than the mediator’s original information partition in Γ̂, then the mediator may not have enough information to provide good recommendations under the restrictions of Γ̂. This can be remedied by allowing payments (see Appendix E), or by making the assumption that the mediator at least knows the transcript of the player to whom she is making any nontrivial recommendation: Definition 3.7. 
A mediator partitionM is direct if, at every mediator decision point (h, τ , REC), so long as |Ah| > 1, the mediator knows the transcript of the player acting at h.M is strongly direct if the mediator also observes the transcript when |Ah| = 1. The condition |Ah| > 1 in the definition allows the mediator to possibly not observe the full information of a player if she does not need to make a nontrivial recommendation to that player. In particular, this allows players to sometimes have information that they only partially reveal to the mediator, so long as the player does not immediately need to act on such information. Coarseness. In literature on correlation, coarseness refers to the restriction that a player must obey any recommendation that she receives (but may choose to deviate by not requesting a recommendation and instead playing any other action). Normal-form coarseness further adds the restriction that players can only choose to deviate at the start of the game—the mediator essentially takes over and plays the game on behalf of non-deviating players. These notions can easily be expressed in terms of our augmented game, therefore also allowing us to express coarse versions of our equilibrium notions as augmented games. 3.3 The gap between polynomial and not polynomial If players cannot send messages to the mediator at all, and the mediator has no other way of gaining any information, we recover the notion of autonomous correlated equilibrium (ACE). It is NP-hard to compute optimal ACE, even in Bayesian games (see e.g., von Stengel and Forges [29]). WhenM is direct and perfect recall, computing an optimal direct (S,M)-certification equilibrium can be done in polynomial time using our framework. When S obeys NRC and M satisfies a stronger condition7, the proof of the revelation principle (Propositions 3.4 and 3.5) works, and the resulting equilibrium is guaranteed to be optimal over all possible equilibria including those that may not be direct. If NRC does not hold, one can still solve the program (1), and the solution is still guaranteed to be an optimal direct equilibrium by Proposition 3.5. However, it is not guaranteed to be optimal over all possible communication structures. Indeed, Green and Laffont [15, Theorem 1] give an instance in which, without NRC, there can be an outcome distribution that is not implementable by a direct mediator. Our program cannot find such an outcome distribution. The counterexample does not preclude the possibility of efficient algorithms for finding optimal certification equilibria in more general cases, but does give intuition for why NRC is crucial to our construction. We could also consider changing the mediator’s information partition so that the mediator does not have perfect recall. This transformation allows us to recover notions of correlation in games. Indeed, if we start from the full-certification equilibrium and only allow the mediator to remember the transcript with the player she is currently talking to, we recover EFCE. Adding coarseness similarly recovers EFCCE and NFCCE. In this setting, the inability to represent the strategy space of an imperfect-recall player may result in the loss of efficient algorithms. 
3.4 A family of equilibria

By varying 1) what the mediator observes, 2) whether the mediator has perfect recall, 3) whether the players can lie or only withhold information, and 4) when and how players can deviate from the mediator's recommended actions, we can use our framework to define a family consisting of 16 conceptually different equilibrium notions. More can be generated by considering other variations in this design space, but we focus on the extreme cases in the table. Some of these were already defined in the literature; the remaining names are ours. The result is Table 1. An inclusion diagram for these notions can be found in Appendix G.

In the table, ex ante means that players have only a binary choice between deviating (in which case they can play whatever they want) and playing (in which case they must always be direct and obey recommendations). With ex ante deviations, it does not matter whether lying is allowed because we can never get to that stage: either the player deviates immediately and never communicates with the mediator, or the player is direct. If the mediator only remembers the current active player's information, and players cannot lie, withholding and coarsely deviating are the same.

Mediator information advantage means that the mediator always learns the infoset of the current active player, and therefore requires no messages from the players. This is equivalent to forcing players to truthfully report information. A mediator with information advantage may still not have perfect information—for example, it will not know whether a player (or nature) has played an action until some other player observes the action. In this setting, the mediator may also have extra private information (known to none of the players), leading to the setting of Bayesian persuasion [17]. In extensive-form games, there are two different reasonable notions of persuasion: one that stems from extending correlated equilibria, and one that stems from extending communication equilibria. The distinction is that, in the former, the mediator has imperfect recall. For a more in-depth discussion of Bayesian persuasion, see Appendix F.

Our framework allows optimal equilibria for all notions in the table to be computed. For perfect-recall mediators, this is possible in polynomial time via the sequence form; for imperfect-recall mediators, the problem is NP-hard in general, but the team belief DAG of Zhang et al. [32] can be used to recover fixed-parameter algorithms. For the notions of correlated equilibrium, this method results in basically the same LP as the correlation DAG of Zhang et al. [31].

We do not claim that all of these notions are easy to motivate. For example, correlated equilibria are usually arrived at in the "truth known, imperfect recall" setting; the correlated equilibrium notions where lying is allowed are more difficult to motivate in this respect. Further, even the fixed-parameter algorithms of Zhang et al. [31] would fail in this setting, because "public states" can no longer be treated as public due to the possibility of lying players. We leave to future research the problem of finding a motivation for the notions that we do not reference elsewhere in the paper.

⁷Roughly speaking, this condition is that players should not be able to cause the mediator to gain information apart from their own messages by sending messages. It holds for all notions we discuss in this paper. Formalizing the general case is beyond the scope of this paper.
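Before turning to the experiments, it may help to make concrete the sequence-form strategy spaces X̂i over which program (1) ranges. The following is a minimal sketch, in Python, of the linear constraint system that carves out a single player's realization plans for a toy decision problem; the example, the encoding, and all names are entirely ours and are meant only as an illustration of the representation, not as code from the paper.

```python
import numpy as np

# Toy perfect-recall decision problem for a single player.  Each infoset is listed
# with the sequence (infoset-action pair) that leads to it and its available actions.
INFOSETS = {
    "I1": (None, ["a", "b"]),         # reached from the empty sequence
    "I2": (("I1", "a"), ["c", "d"]),  # reached after playing a at I1
}

def sequence_form_constraints(infosets):
    """Build (E, e, index) so that the player's realization plans (sequence-form
    strategies) are exactly the vectors x >= 0 with E @ x = e."""
    sequences = [None] + [(I, a) for I, (_, acts) in infosets.items() for a in acts]
    index = {seq: k for k, seq in enumerate(sequences)}
    E = np.zeros((1 + len(infosets), len(sequences)))
    e = np.zeros(1 + len(infosets))
    E[0, index[None]], e[0] = 1.0, 1.0            # the empty sequence carries mass 1
    for row, (I, (parent, acts)) in enumerate(infosets.items(), start=1):
        E[row, index[parent]] = -1.0              # probability flowing into infoset I ...
        for a in acts:
            E[row, index[(I, a)]] = 1.0           # ... equals the mass placed on its actions
    return E, e, index

E, e, index = sequence_form_constraints(INFOSETS)
# e.g. the pure strategy "play a at I1, then c at I2" is the 0/1 vector:
x = np.zeros(len(index))
x[[index[None], index[("I1", "a")], index[("I2", "c")]]] = 1.0
assert np.allclose(E @ x, e)
```

Any x ≥ 0 with E x = e is a valid (possibly mixed) sequence-form strategy. The mediator's polytope X̂M in program (1) has exactly this form, only for a much larger decision problem, and it is this linearity that the dualization step behind Theorem 3.2 exploits.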
4 Experiments We ran our algorithm for communication and full-certification equilibria on various two-player games, and compared the results to those given by notions of optimal correlation in games. The games used in the experiments are given in Appendix D. All experiments were allocated four CPU cores and 64 GB of RAM. Linear programs were solved with Gurobi 9.5. When payments are used, the allowable payment range is [0,M ] where M is the reward range of the game. Experimental results can be found in Table 2. In the battleship and sheriff instances, there is not a significant difference in performance between finding full-certification equilibria and finding optimal correlated equilibria in terms of performance—this is because, unlike in the general case, optimal correlated equilibria in two-player games without chance can be found in polynomial time [29] anyway. In the ridesharing instances, computing optimal correlated equilibria is much more computationally intensive because the game contains non-public chance actions. Computing optimal full-certification equilibria is comparably easy, and this difference is clearly seen in the timing results. Finding optimal communication equilibria is much more intensive than finding optimal fullcertification equilibria, owing to the quadratic size of the augmented game for communication equilibria. This often causes communication equilibria to be the hardest of the notions to compute in practice, despite optimal correlation being NP-hard. In Figure 1, we have plotted the payoff spaces of some representative instances. The plots show how the polytopes of communication and full-certification equilibria behave relative to correlated equilibria. In the battleship and sheriff instances, the space of communication equilibrium payoffs is a single point, which implies that the space of NFCE (and hence Nash) equilibrium payoffs is also that single point. Unfortunately, that point is the Pareto-least-optimal point in the space of EFCEs. In the ridesharing instances, communication allows higher payoffs. This is because the mediator is allowed to “leak” information between players. 5 Conclusions and future research We have shown that optimal communication and certification equilibria in extensive-form games can be computed via linear programs of polynomial size, or almost-linear size in the full-certification case. We have used our machinery to derive an entire family of equilibrium concepts which we hope to be of use in the future. Possible future directions include the following. 1. Are there efficient online learning dynamics, in any reasonable sense of that term, that converge to certification or communication equilibrium? 2. Is there a better-than-quadratic-size linear program for communication equilibria? 3. Is it possible to extend our augmented game construction to also cover normal-form corre- lated equilibria while maintaining efficiency? 4. Investigate further the comparison between communication and correlation in games. For example, when and why do communication equilibria achieve higher social welfare than extensive-form correlated equilibria? Acknowledgements This material is based on work supported by the National Science Foundation under grants IIS-1901403 and CCF-1733556, and the ARO under award W911NF2010081.
1. What is the focus and contribution of the paper regarding extensive-form games? 2. What are the strengths of the proposed approach, particularly in terms of its originality and technical soundness? 3. What are the weaknesses of the paper regarding its lack of attention to real-world systems and limited discussion of weaknesses? 4. How does the reviewer assess the clarity and significance of the paper's content? 5. Are there any concerns regarding the limitations of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies the problem of computing optimal equilibria in extensive-form games. Standard equilibrium notions are hard (i.e., NP-hard) to optimize (e.g., for revenue or social welfare) in extensive-form games. The authors study communication and certification equilibria, showing that they can be optimized in polynomial time. These arise when a mediator is present who can send and receive messages from the players. They build additional results on top of this main result, namely: A class of equilibria where the players cannot reveal false information, leading to a linear size linear program (scales better than other notions). They consider the integration of payments, allowing the mediator a more expressive way to modify the outcome in settings where they are permitted. They define a family of equilibria from the main result that correspond to different tweaks to the setting, e.g., imperfect recall for moderator. They recover existing algorithms for equilibrium computation as special cases of their general method. Strengths And Weaknesses Originality: They are the first to show that communication and certification equilibria can be optimized in polynomial time for extensive-form games. The paper lays out a map of equilibrium concepts and shows that many can be computed through the same meta-algorithm. Coverage of related work seems adequate. Quality: There is a substantial amount of math I did not check, but the work appears technically sound. It's a theory paper, so not a lot of attention is spent on thinking about potentially real systems. I found myself wondering at some points wishing for a more direct discussion of motivation. There is not a lot of discussion of weaknesses. The authors' claims seem well supported. Clarity: It's math-heavy. The notation was explained pretty clearly and a substantial chunk of the paper is spent doing so. I would prefer more pictures or examples to help explain (perhaps in the appendix due to space constraints). The authors say they will not release code, which I don't understand. Significance: The results seem like they could be useful in some real-world contexts. In any case, it is good to know that such tractable families of equilibria exist. The generalization of past algorithms provides a bit of interesting big picture structure. Questions N/A Limitations Negative social impact is not a concern here, in my opinion. There is little explicit discussion of limitatins.
NIPS
Title Objective and efficient inference for couplings in neuronal networks Abstract Inferring directional couplings from the spike data of networks is desired in various scientific fields such as neuroscience. Here, we apply a recently proposed objective procedure to the spike data obtained from the Hodgkin–Huxley type models and in vitro neuronal networks cultured in a circular structure. As a result, we succeed in reconstructing synaptic connections accurately from the evoked activity as well as the spontaneous one. To obtain the results, we invent an analytic formula approximately implementing a method of screening relevant couplings. This significantly reduces the computational cost of the screening method employed in the proposed objective procedure, making it possible to treat large-size systems as in this study. 1 Introduction Recent advances in experimental techniques make it possible to simultaneously record the activity of multiple units. In neuroscience, multi-electrodes and optical imaging techniques capture largescale behaviors of neuronal networks, which facilitate a deeper understanding of the information processing mechanism of nervous systems beyond the single neuron level [1-6]. This preferable situation, however, involves technical issues in dealing with such datasets because they usually consist of a large amount of high-dimensional data which are difficult to be handled by naive usages of conventional statistical methods. A statistical-physics-based approach for tackling these issues was presented using the Ising model [7]. Although the justification to use the Ising model for analyzing neuronal systems is not completely clear [8,9,10], its performance was empirically demonstrated [7], which triggered further applications [11-22]. An advantage of using the Ising model is that several analytical techniques for inverse problems are available [23-29], which allows us to infer couplings between neurons with a feasible computational cost. Another advantage is that it is straightforward to introduce variants of the model. Beyond the conventional data analysis, an important variant is the kinetic Ising model, which is more suitable to take into account the correlations in time, since this extended model removes the symmetric-coupling constraint of the Ising model. A useful mean-field (MF) inverse formula for the kinetic Ising model has been presented in [25,26]. Two problems arise when treating neuronal systems’ data in the framework of the Ising models. The first problem is how to determine an appropriate size of time bins when discretizing original signals in time; the appropriate size differs from the intrinsic time-scale of the original neuronal sys- 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. tems because the Ising models are regarded as a coarse-grained description of the original systems. Hence, the way of the transformation to the models of this type is nontrivial. The second problem is extracting relevant couplings from the solution of the inverse problem; unavoidable noises in experimental data contaminate the inferred couplings, and hence, we need to screen the relevant ones among them. In a previous study [30], an information-theoretic method and a computational-statistical technique were proposed for resolving the aforementioned first and second problems, respectively. Those methods were validated in two cases: in a numerical simulation based on the Izhikevich models and in analyzing in vitro neuronal networks. 
The results are surprisingly good: the synaptic connections are reconstructed with fairly high accuracy. This finding motivates us to further examine the methods proposed in [30]. Based on this motivation, this study applies these methods to data from the Hodgkin–Huxley model, which describes the firing dynamics of a biological neuron more accurately than the Izhikevich model. Further, we examine the situation where responses of neuronal networks are evoked by external stimuli. We implement this situation both in the Hodgkin–Huxley model and in a cultured neuronal network of a previously described design [31], and test the methods in both cases. In addition, based on the previously described MF formula of [25,26], we derive an efficient formula that implements the earlier method of screening relevant couplings at a significantly smaller computational cost. In practice, the naive implementation of the screening method is computationally expensive and can be a bottleneck when applied to large-scale networks. Exploiting the simplicity of the model thus allows us to carry out this advanced statistical processing in a reasonable time. Below, we address these three points by employing the simple kinetic Ising model to efficiently infer synaptic couplings in neuronal networks.

2 Inference procedure

The kinetic Ising model consists of N units, {si}_{i=1}^N, and each unit takes bipolar values, si(t) = ±1. Its dynamics are governed by the so-called Glauber dynamics:

\[
P\left(\boldsymbol{s}(t+1) \mid \boldsymbol{s}(t); \{J_{ij}, \theta_i(t)\}\right) = \prod_{i=1}^{N} \frac{\exp\left[s_i(t+1)\, H_i(t; \{J_{ij}, \theta_i(t)\})\right]}{\exp\left[H_i(t; \{J_{ij}, \theta_i(t)\})\right] + \exp\left[-H_i(t; \{J_{ij}, \theta_i(t)\})\right]}, \qquad (1)
\]

where Hi(t) is the effective field, defined as H_i(t) = θ_i(t) + Σ_{j=1}^N J_{ij} s_j(t), θ_i(t) is the external force, and J_{ij} is the coupling strength from j to i. This model also corresponds to a generalized McCulloch–Pitts model in theoretical neuroscience and to logistic regression in statistics. When applying it to spike train data, we regard the state s_i(t) = 1 (−1) as the firing (non-firing) state.

The inference framework we adopt here is the standard maximum-likelihood (ML) framework. We repeat R experiments and denote by {s*_{ir}(t)}_{i=1}^N, for t = 1, 2, · · · , M, the firing pattern in experiment r (= 1, 2, · · · , R). The ML framework requires us to solve the following maximization problem over the variable set {J_{ij}, θ_i(t)}:

\[
\{\hat{J}_{ij}, \hat{\theta}_i(t)\} = \operatorname*{argmax}_{\{J_{ij}, \theta_i(t)\}} \left\{ \frac{1}{R} \sum_{r=1}^{R} \sum_{t=1}^{M} \log P\left(\boldsymbol{s}^*_r(t+1) \mid \boldsymbol{s}^*_r(t); \{J_{ij}, \theta_i(t)\}\right) \right\}. \qquad (2)
\]

This cost function is concave with respect to {J_{ij}, θ_i(t)}, and hence a number of efficient solvers are available [32]. However, we do not directly maximize eq. (2) in this study; instead we employ the MF formula proposed previously [25,26]. The MF formula is reasonable in terms of computational cost and is sufficiently accurate when the dataset size R is large. Moreover, the availability of an analytic formula enables us to construct an effective approximation that reduces the computational cost of the post-processing step, as shown in Sec. 2.3.

Unfortunately, in many experimental settings it is not easy to conduct a sufficient number of independent experiments [33,34], as in the case of Sec. 4. Hence, below we assume the stationarity of all statistics and ignore the time dependence of θ(t). This allows us to identify the average over time with the ensemble average, which significantly improves the statistics. We admit this assumption is not always valid, particularly when time-dependent external forces are present, although we treat such cases in Sec. 3.2 and Sec. 4.2.
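For reference, the generative model in Eq. (1) is easy to simulate, which is convenient for sanity-checking the inverse formulas of the following subsections against a known ground truth. The sketch below is ours (including all names and parameter choices) and simply samples the Glauber dynamics with a constant external field.

```python
import numpy as np

def simulate_glauber(J, theta, M, seed=None):
    """Sample a length-M trajectory of the kinetic Ising model of Eq. (1).
    J:     (N, N) coupling matrix, J[i, j] = coupling from neuron j to neuron i.
    theta: (N,) constant external field.
    Returns an (M, N) array with entries +/-1."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    s = np.empty((M, N))
    s[0] = rng.choice([-1.0, 1.0], size=N)
    for t in range(M - 1):
        H = theta + J @ s[t]                     # effective field H_i(t)
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * H))  # P(s_i(t+1) = +1) = e^H / (e^H + e^{-H})
        s[t + 1] = np.where(rng.random(N) < p_plus, 1.0, -1.0)
    return s
```

Estimating couplings from trajectories generated in this way and comparing them with the ground-truth J provides a quick consistency check before moving to real spike data.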
Despite this limitation, we still stress that the present approach can extract synaptic connections among neurons accurately, although the presence of time-dependent inputs may degrade its performance. Possible directions to overcome this limitation are discussed in Sec. 5.

2.1 Pre-processing: Discretization of time and binarization of state

In the pre-processing step, we have to decide the duration of the interval used to transform real time into the unit time \Delta\tau of the Ising scheme. We term \Delta\tau the bin size. Once the bin size is determined, the whole real-time interval [0, T] is divided into a set of time bins labelled as \{t\}_{t=1}^{M = T/\Delta\tau}. Given this set of time bins, we binarize the neuron states: if there is no spike of neuron i in the time bin labelled t, then s^*_i(t) = -1; otherwise s^*_i(t) = 1. This is the whole pre-processing step we adopt, and it is a commonly used approach [7].

Determination of the bin size \Delta\tau can be a crucial issue: different values of \Delta\tau may lead to different results. To determine it in an objective way, we employ an information-theory-based method proposed previously [30]. Following this method, we determine the bin size as

\Delta\tau_{\mathrm{opt}} = \operatorname*{argmax}_{\Delta\tau} \left( \frac{T}{\Delta\tau} - 1 \right) \sum_{i \neq j} \hat{I}_{\Delta\tau}(s_i(t+1); s_j(t)),   (3)

where I_{\Delta\tau}(s_i(t+1); s_j(t)) denotes the mutual information between s_i(t+1) and s_j(t) in the coarse-grained series with \Delta\tau, and \hat{I}_{\Delta\tau}(s_i(t+1); s_j(t)) is its plug-in estimator. The explicit formula is

\hat{I}_{\Delta\tau}(s_i(t+1); s_j(t)) = \sum_{(\alpha,\beta) \in \{+,-\}^2} r_{\alpha\beta}(i, t+1; j, t) \log \frac{r_{\alpha\beta}(i, t+1; j, t)}{r_{\alpha}(i, t+1)\, r_{\beta}(j, t)},   (4)

where r_{++}(i, t+1; j, t) denotes the realized ratio of the pattern (s_i(t+1), s_j(t)) = (+1,+1), i.e., r_{++}(i, t+1; j, t) \equiv (1/(M-1)) \#\{(s_i(t+1), s_j(t)) = (+1,+1)\}, and the other double-subscript quantities \{r_{+-}, r_{-+}, r_{--}\} are defined similarly. Single-subscript quantities are the realized ratios of the corresponding single-neuron states, for example r_{+}(j, t) \equiv (1/M) \#\{s_j(t) = +1\}. The meaning of eq. (3) is clear: the quantity on the right-hand side, hereafter termed the gross mutual information, is merely the likelihood of a (null) hypothesis that s_i(t+1) and s_j(t) fire without any correlation. The optimal value \Delta\tau_{\mathrm{opt}} is chosen so as to reject this hypothesis most strongly. This can also be regarded as a generalization of the chi-square test.

2.2 Inference algorithm: The MF formula

The previously derived MF formula [25,26] is given by

\hat{J}^{\mathrm{MF}} = A^{-1} D C^{-1},   (5)

where

\mu_i(t) = \langle s_i(t) \rangle, \quad A_{ij}(t) = \left( 1 - \mu_i^2(t) \right) \delta_{ij}, \quad C_{ij}(t) = \langle s_i(t) s_j(t) \rangle - \mu_i(t) \mu_j(t), \quad D_{ij}(t) = \langle s_i(t+1) s_j(t) \rangle - \mu_i(t+1) \mu_j(t).   (6)

Note that the estimate \hat{J}^{\mathrm{MF}} seemingly depends on time, but it is known that this time dependence is very weak and can be ignored. Given \hat{J}^{\mathrm{MF}}, the MF estimate of the external field is

\hat{\theta}^{\mathrm{MF}}_i(t) = \tanh^{-1}\left( \mu_i(t+1) \right) - \sum_j \hat{J}^{\mathrm{MF}}_{ij} \mu_j(t),   (7)

although we focus on the couplings between neurons and do not estimate the external force in this study. The literal meaning of the brackets is the ensemble average corresponding to (1/R)\sum_{r=1}^{R} in eq. (2), but here we identify it with the average over time. That is, we use the time-averaged statistics of \{\mu, C, D, \theta\}, as declared above.

2.3 Post-processing: Screening relevant couplings and its fast approximation

The basic idea of our screening method is to compare the coupling estimated from the original data with the one estimated from randomized data in which the time series of firing patterns of each neuron is independently and randomly permuted.
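Before detailing the screening step, here is a minimal numpy sketch, assuming our own helper names rather than the authors' code, of the binarization of Sec. 2.1 and the MF estimate of eqs. (5)-(6) computed from time-averaged statistics; binarize, mf_couplings, and the spike_times format are hypothetical.

```python
import numpy as np

def binarize(spike_times, n_neurons, T, dt):
    # Sec. 2.1: map spike times (one array of times per neuron) onto an
    # (M, N) array of +/-1 states with bin size dt.
    M = int(T / dt)
    s = -np.ones((M, n_neurons), dtype=int)
    for i, times in enumerate(spike_times):
        bins = np.minimum((np.asarray(times) / dt).astype(int), M - 1)
        s[bins, i] = 1
    return s

def mf_couplings(s):
    # Sec. 2.2: J_MF = A^{-1} D C^{-1} (eq. 5) with the time-averaged
    # statistics of eq. (6) under the stationarity assumption.
    # Assumes every neuron both fires and stays silent at least once,
    # so that A and C are invertible.
    s_prev, s_next = s[:-1].astype(float), s[1:].astype(float)
    mu = s.mean(axis=0)
    C = np.cov(s_prev, rowvar=False)            # equal-time covariance
    D = (s_next - s_next.mean(axis=0)).T @ (s_prev - s_prev.mean(axis=0)) / (len(s_prev) - 1)
    A_inv = np.diag(1.0 / (1.0 - mu ** 2))      # A_ij = (1 - mu_i^2) delta_ij
    return A_inv @ D @ np.linalg.inv(C)
```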
We do not explain the detailed procedures here because similar methods have been described previously [7,30]. Instead, we state the essential point of the method and derive an approximate formula that implements the screening method in a computationally efficient manner. The key of the method is to compute the probability distribution of \hat{J}_{ij}, P(\hat{J}_{ij}), when applying our inference algorithm to the randomized data. Once we obtain this probability distribution, we can judge how unlikely our original estimate is compared to the estimates from the randomized data. If the original estimate is sufficiently unlikely, we accept it as a relevant coupling; otherwise, we reject it.

Evaluation of the above probability distribution is not easy in general, and hence it is common to have recourse to numerical sampling, which can be a computational burden. Here, we avoid this problem by computing it analytically under a reasonable approximation. For the randomized data, we may assume that two neurons s_i and s_j fire independently with fixed means \mu_i and \mu_j, respectively. Under this assumption, by the central limit theorem, each diagonal component of C converges to C_{ii} = 1 - \mu_i^2 = A_{ii}, while its off-diagonal components become zero-mean Gaussian variables whose variance is proportional to 1/(M-1) and is thus small. All the components of D behave similarly to the off-diagonal ones of C. This consideration leads to the expression

\hat{J}^{\mathrm{ran}}_{ij} = \sum_k (A^{-1})_{ii} D_{ik} (C^{-1})_{kj} \approx (A^{-1})_{ii} D_{ij} (A^{-1})_{jj} = \frac{1}{(1 - \mu_i^2)(1 - \mu_j^2)} D_{ij}.   (8)

By the independence between s_i and s_j, the variance of D_{ij} becomes (1 - \mu_i^2)(1 - \mu_j^2)/(M-1). Hence the probability P(|\hat{J}^{\mathrm{ran}}_{ij}| \geq \Phi_{\mathrm{th}}) is obtained as

P\left( |\hat{J}^{\mathrm{ran}}_{ij}| \geq \Phi_{\mathrm{th}} \right) \approx 1 - \mathrm{erf}\left( \Phi_{\mathrm{th}} \sqrt{\frac{(1 - \mu_i^2)(1 - \mu_j^2)(M-1)}{2}} \right),   (9)

where \mathrm{erf}(x) is the error function defined as

\mathrm{erf}(x) \equiv \frac{2}{\sqrt{\pi}} \int_0^x dy\, e^{-y^2}.   (10)

Inserting the absolute value of the original estimate \hat{J}_{ij} into \Phi_{\mathrm{th}}, we obtain its likelihood and can judge whether it should be accepted. Below, we set the threshold (\Phi_{\mathrm{th}})_{ij} associated with the significance level p_{\mathrm{th}} as

(\Phi_{\mathrm{th}})_{ij} = \sqrt{\frac{2}{(1 - \mu_i^2)(1 - \mu_j^2)(M-1)}}\, \mathrm{erf}^{-1}(1 - p_{\mathrm{th}})   (11)

and accept only those \hat{J}_{ij} such that |\hat{J}_{ij}| > (\Phi_{\mathrm{th}})_{ij}.

3 Hodgkin–Huxley networks

We first evaluate the accuracy of our methods using synthetic systems consisting of Hodgkin–Huxley neurons. The dynamics of the neurons are given by

C \frac{dV_i}{d\tau} = -\bar{g}_K n_i^4 (V_i - E_K) - \bar{g}_{Na} m_i^3 h_i (V_i - E_{Na}) - \bar{g}_L (V_i - E_L) + I^{\mathrm{ex}}_i,   (12)

\frac{dn_i}{d\tau} = \alpha_n(V_i)(1 - n_i) - \beta_n(V_i) n_i,   (13)

\frac{dm_i}{d\tau} = \alpha_m(V_i)(1 - m_i) - \beta_m(V_i) m_i,   (14)

\frac{dh_i}{d\tau} = \alpha_h(V_i)(1 - h_i) - \beta_h(V_i) h_i,   (15)

where V_i is the membrane potential of the ith neuron, n_i is the activation variable representing the ratio of open channels for the K+ ion, and m_i and h_i are the activation and inactivation variables for the Na+ ion, respectively. All parameters, except the external input term I^{\mathrm{ex}}_i, are set as described in [35]. The input forces are given by

I^{\mathrm{ex}}_i = c_i(\tau) + \sum_{j=1}^{N} K_{ij} V_j \Theta(V_j - V_{\mathrm{th}}) + a \sum_k \delta\left( \tau - \tau^k_i \right),   (16)

where c_i(\tau) represents environmental noise generated by a Poisson process, the second term represents the couplings with the threshold voltage V_{\mathrm{th}} = 30 mV and the Heaviside step function \Theta(\cdot), and the last term denotes the impulse stimulations expressed with the delta function. Here, we consider simple couplings without delay, which we term the synaptic connections, and aim to reconstruct their structure, together with the excitatory/inhibitory signs, using our methods. We use networks of N = 100 neurons, where 90 neurons are excitatory and have positive outgoing couplings while the others are inhibitory.
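Returning briefly to the post-processing step of Sec. 2.3, a short sketch of the analytic screening rule of eq. (11) might look as follows, assuming numpy and scipy are available; screen_couplings and its arguments are illustrative names, not the authors' implementation.

```python
import numpy as np
from scipy.special import erfinv

def screen_couplings(J_hat, mu, M, p_th=1e-3):
    # Sec. 2.3: keep only entries of the inferred coupling matrix whose
    # magnitude exceeds the analytic threshold (11) derived for the
    # permutation null model; mu holds the time-averaged means mu_i.
    var_terms = np.outer(1.0 - mu ** 2, 1.0 - mu ** 2) * (M - 1)
    phi_th = np.sqrt(2.0 / var_terms) * erfinv(1.0 - p_th)
    return np.where(np.abs(J_hat) > phi_th, J_hat, 0.0)
```

This replaces the sampling-based surrogate comparison with a single closed-form threshold per matrix entry, which is the source of the computational saving emphasized above.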
The rate and strength of the Poisson process are set as \lambda = 180 Hz and b = 2 mV, respectively, for all neurons. We generate the time series by integrating (12)-(15) with the Euler method with d\tau = 0.01 ms, where we regard a neuron as firing when its voltage exceeds V_{\mathrm{th}}, and use the spike-train data over the whole period T = 10^6 ms for our inference.

3.1 Spontaneous activity case

First, we consider a system on a chain network in which each neuron has three synaptic connections to adjacent neurons in one direction. The connection strength K_{ij} is drawn from the uniform distribution on [0.015, 0.03] for the excitatory neurons and on [-0.06, -0.03] for the inhibitory neurons. Here, we set a = 0 mV to study the spontaneous activity. An example of the spike trains generated during 3 seconds is shown in Fig. 1 (a), where the spike times and corresponding neuronal indices are plotted. Subsequently, using the whole spike-train data, we calculate the gross mutual information for different \Delta\tau; the result is indicated by the red curve in Fig. 1 (b). The curve has a unimodal shape, which implies the existence of an optimal time-bin size of approximately \Delta\tau = 3 ms, although the original system has no delay. We suppose that inputs must accumulate sufficiently to generate a spike, which takes some time, and this is a possible reason for the emergence of this nontrivial time scale. To validate our approximation (8), we randomize the coarse-grained series with \Delta\tau = 3 ms in the time direction independently, rescale \hat{J}^{\mathrm{ran}}_{ij} by multiplying by \sqrt{(1-\mu_i^2)(1-\mu_j^2)(M-1)}, and compare the results of 1000 randomized datasets with the standard Gaussian distribution in Fig. 1 (c), which shows good agreement.

Using \Delta\tau = 3 ms to coarse-grain the spike trains, we apply the inverse formula to the series and screen relevant couplings with p_{\mathrm{th}} = 10^{-3}, which leads to the estimated coupling matrix shown in Fig. 1 (e), while the one used to generate the data is shown in Fig. 1 (d). The asymmetric network structure is recovered well, including the discrimination of the signs of the couplings. The conditional ratios of correctness are shown in Fig. 1 (f), where the inference results obtained with different values of \Delta\tau are also shown. This demonstrates the fairly accurate reconstruction obtained using our inference procedure. We also show the receiver operating characteristic (ROC) curves obtained by gradually changing the value of p_{\mathrm{th}} in Fig. 1 (g), for different values of \Delta\tau. We conclude that using non-optimal time bins drastically decreases the accuracy of the inference results. To illustrate the robustness of the optimality of the time bin, in Fig. 1 (i) we plot the means and standard deviations of the gross mutual information over 10 different simulations, showing that the variance is small and the result is robust.

To consider a more general situation, we also employ a Hodgkin–Huxley system on a random network. A directional synaptic connection between every pair of neurons is generated with probability 0.1, and the excitatory and inhibitory couplings are drawn from the uniform distributions on [0.01, 0.02] and [-0.04, -0.02], respectively. The corresponding inference results for its spontaneous activity are shown by the green curves in Figs. 1 (b) and (f). The ROC curves for three different values of \Delta\tau are also shown in Fig. 1 (h).
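The bin-size scan used throughout these experiments can be sketched as follows; this is an illustrative and slightly simplified rendering of eqs. (3)-(4) (the marginal and joint ratios are estimated over the same aligned series), reusing the hypothetical binarize() helper from the earlier sketch.

```python
import numpy as np

def plugin_mi(x, y):
    # Plug-in estimator (4) of the mutual information between two +/-1 series,
    # here x = s_i(t+1) and y = s_j(t) over the aligned time bins.
    mi = 0.0
    for a in (-1, 1):
        for b in (-1, 1):
            p_ab = np.mean((x == a) & (y == b))
            if p_ab > 0:
                p_a, p_b = np.mean(x == a), np.mean(y == b)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def gross_mi(spike_times, n_neurons, T, dt):
    # Gross mutual information (3) for one candidate bin size dt.
    s = binarize(spike_times, n_neurons, T, dt)
    total = sum(plugin_mi(s[1:, i], s[:-1, j])
                for i in range(n_neurons) for j in range(n_neurons) if i != j)
    return (T / dt - 1) * total

# The optimal bin size is the candidate dt maximizing gross_mi, e.g.
# dt_opt = max(candidate_dts, key=lambda dt: gross_mi(spikes, N, T, dt)).
```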
We confirm that the inference is sufficiently effective in the random-network system as well as in the chain system.

3.2 Evoked activity case

We next investigate the performance in systems where responses are evoked by impulse stimuli. The model parameters, except for a, are the same as those of the chain model in Sec. 3.1. The strength of the external force is set as a = 5.3 mV, and the stimulations are injected into all neurons at intervals of 1 s. In Fig. 2 (a) we show the spike trains, where we observe that most of the neurons fire at the injection times \tau = 0.5, 1.5, 2.5 s. The gross mutual information against \Delta\tau is shown in Fig. 2 (b). Although the shape of the curve is modified by the impulse inputs, its peak is located at a similar value of \Delta\tau. Therefore, we use the same value \Delta\tau = 3 ms. Applying our inference procedure with \Delta\tau = 3 ms and p_{\mathrm{th}} = 10^{-3}, we obtain the inferred couplings shown in Fig. 2 (c); the original network is shown in Fig. 1 (d). Comparing Fig. 2 (c) with Fig. 1 (e), we see that while the inference detects the existence of the synaptic connections, more false couplings appear in the evoked case. The conditional ratios in Fig. 2 (d) indicate that the existence of external inputs may increase the false-positive rate at the same p_{\mathrm{th}}. The ROC curves are shown in Fig. 2 (f).

4 Cultured neuronal networks

We apply our inference methods to the real neuronal systems introduced in a previous study [31], where rat cortical neurons were cultured in micro wells. The wells had a circular structure, and consequently the synapses of the neurons were likely to form a physically asymmetric chain network, which is similar to the situation in the Hodgkin–Huxley models used in Sec. 3. The activity of the neurons was recorded by a multi-electrode array with 40 µs time resolution, and the Efficient Technology of Spike sorting method [36] was used to identify the spike events of individual neurons. We study both the spontaneous and evoked activities here.

4.1 Spontaneous activity case

We first use the spontaneous activity data recorded over 120 s. Spike sorting identified 100 neurons that generated spikes. The spike raster plot during 3 seconds is displayed in Fig. 3 (a). We calculate the gross mutual information as in the case of the Hodgkin–Huxley models, and the obtained optimal bin size is approximately \Delta\tau = 5 ms. We also confirm that the inferred couplings are similar to the results described previously [30], which supports the validity of the approximation method introduced in Sec. 2.3. We show the inferred network in Figs. 3 (b-d) for different values p_{\mathrm{th}} = 10^{-3}, 10^{-6}, 10^{-9}, where we place the nodes denoting the neurons on a circle following the experimental design [31]. A stricter threshold provides a clearer demonstration of the relevant couplings.

4.2 Evoked activity case

We next study an evoked neuronal system, where an electrical pulse stimulation is injected from an electrode every 3 seconds; the other experimental settings are similar to those of the spontaneous case. In this case, the activity of 149 neurons was identified by spike sorting. An example of the spike trains is shown in Fig. 4 (a). The gross mutual information is shown in Fig. 4 (b), where we can see a peak around \Delta\tau = 10 ms. Setting \Delta\tau = 10 ms and p_{\mathrm{th}} = 10^{-3}, 10^{-6}, we obtain the estimated coupling matrices shown in Figs. 4 (c,d).
In these cases as well, we can observe the prominent diagonal elements representing the asymmetric chain structure, although at the less strict significance level some far-off-diagonal elements emerge due to the existence of the external inputs, a situation similar to that in the Hodgkin–Huxley simulation of Sec. 3.2. The inferred network with the strict threshold p_{\mathrm{th}} = 10^{-9} is displayed in Fig. 4 (e), where some long-range couplings are still estimated even though, by the experimental design, no physical connections corresponding to them exist.

5 Conclusion and discussion

We propose a systematic inference procedure for extracting couplings from point-process data. The contribution of this study is three-fold: (i) the derivation of an analytic formula that screens relevant couplings in a computationally efficient manner; (ii) an examination on the Hodgkin–Huxley model, with and without impulse stimuli; and (iii) an examination on an evoked cultured neuronal network. The applications to the synthetic data, with and without the impulse stimuli, demonstrate fairly accurate reconstructions of synaptic connections by our inference methods. The application to the real data of spontaneous activity in the cultured neuronal system also highlights the effectiveness of the proposed methods in detecting synaptic connections. From the comparison between the analyses of the spontaneous and evoked activities, we found that the inference accuracy is degraded by external stimuli. One potential origin is the breaking of our stationarity assumption on the statistics \{\mu, C, D\} due to the time-varying external force \theta. To overcome this, techniques that resolve the insufficiency of samples, such as regularization, will be helpful. A promising approach might be the introduction of an \ell_1 regularization into eq. (2), which would enable us to automatically screen out irrelevant couplings. Comparing it with the present approach based on computational statistics will be interesting future work.

Acknowledgments This work was supported by MEXT KAKENHI Grant Numbers 17H00764 (YT, TO, and YK) and 18K11463 (TO), and RIKEN Center for Brain Science (YT and TI).
1. What is the main contribution of the paper regarding neural couplings and functional connectivity? 2. What are the strengths of the proposed approach, particularly in terms of experiments and low computational burden? 3. What are the limitations or weaknesses of the paper, such as the focus on a single topology and the choice of using hypothesis testing for binning? 4. Are there any suggestions for improving the paper, such as testing and reporting on more topologies and avoiding subjective language? 5. Do you have any questions about the results or analysis presented in the paper, such as patterns in Figure 4 or the distribution of error/noise?
Review
Review

SUMMARY OF THE PAPER
Dealing with a relevant research problem – the reconstruction of synaptic connections – the authors put forth a new approximation to estimate neural couplings from spike data, with implications for the analysis of functional connectivity. It took me a bit to understand what the intended goal was (the aim or hypothesis is never stated explicitly and the contributions are declared at the end), but as soon as the intention was clear, the rest of the paper read quite well. In their approximation, the authors make some assumptions (e.g. time invariance) that are strict but otherwise acceptable and reasoned, and if I may, I would proceed just like the authors. The simulations on synthetic data are very informative, and the results from the cultured cells give a good idea of some of the practical limitations of the approach. I'm not sure I share the idea that some coactivity threshold should be based on a significance test, but then, it is not that I have a better recommendation or harder facts, so the latter is only my humble opinion.

STRONG POINTS
• Good and well-explained experiments, both on synthetic data and cultured cells, covering both spontaneous and evoked activity
• Acceptable and well-reasoned decisions whenever one had to be made
• The contributed approximation for analysing neural couplings is an important contribution that (I understand) has a low computational burden.

WEAK POINTS
• Tested on a single topology
• At times, the authors seem to favour mathematical decisions over biological dictum (e.g. why should the optimal binning be based on hypothesis testing and not on some physiological guidance?).
• The criticism of models such as the Ising models seems more concerned with justifying the authors' choice than with pointing out a genuine limitation of the model and analysis

SUGGESTIONS TO (PERHAPS) IMPROVE THE PAPER
Major
• Well, mostly the first two points stated as weaknesses above, and yet neither is a critical impediment for acceptance; testing and reporting on more topologies would make the paper too dense, and my concerns about favouring some mathematical conveniences over phenomenological constructs are understandable within the specific frame of the proposal. I'm happy leaving how to attend to these two to the authors' criterion.
Minor
• Avoid adjectives (e.g. difficult, naïve, etc.) and whenever possible rely on objective (preferably quantitative) measures.
• Line 230: Are there any patterns that the authors can appreciate in Fig 4? For instance, are there any links corresponding to neurons "seeing" the spike train at a particular lag? Is any preferred anisotropy direction in the transfer of information across neurons worth mentioning? Does the distribution of error/noise follow any known distribution?
• If not critical for the authors' purposes and writing style, I would suggest moving the contributions to the introduction and perhaps declaring the goal explicitly (providing a new approximation affording some specific benefit) in the abstract.
1. What are the improvements made by the paper over previous works on capturing directional coupling spike behavior? 2. What are the strengths of the paper regarding its clarity, quality, and originality? 3. How does the reviewer assess the significance of the paper's contributions? 4. What are the limitations of the paper, particularly regarding its application to real data from cortical neurons of rats? 5. Do you have any questions or concerns about the paper's methodology or conclusions?
Review
Review

This paper improves on previous work on capturing the directional coupling spike behaviour of large neuronal systems. Previous work used synthetic data from the Izhikevich model of neuronal activity (a simpler model than the well-known Hodgkin–Huxley set of differential equations), the kinetic Ising model, and mean-field theory for approximate inference. The present paper develops a slightly simplified analytical solution to the equations, which is subsequently applied to the more sophisticated Hodgkin–Huxley neuronal activity model (the synthetic data part) and to real data from cortical neurons of the rat. It appears that, although not perfect, the mathematical model performs well when the right parameters are chosen.

Quality: a well-written and relatively easy-to-follow paper that has a clear goal and outcome.
Clarity: clear structure. The content presumes quite some basic knowledge of neuroscience research, so it may not be too clear for people outside of neuroscience.
Originality: the main contribution is the analytical formula (I have no idea whether this is original; I can imagine that similar ideas have been stated before), while the rest of the paper follows previously done research, however in a scholarly way.
Significance: moderate.
NIPS
Title Objective and efficient inference for couplings in neuronal networks Abstract Inferring directional couplings from the spike data of networks is desired in various scientific fields such as neuroscience. Here, we apply a recently proposed objective procedure to the spike data obtained from the Hodgkin–Huxley type models and in vitro neuronal networks cultured in a circular structure. As a result, we succeed in reconstructing synaptic connections accurately from the evoked activity as well as the spontaneous one. To obtain the results, we invent an analytic formula approximately implementing a method of screening relevant couplings. This significantly reduces the computational cost of the screening method employed in the proposed objective procedure, making it possible to treat large-size systems as in this study. 1 Introduction Recent advances in experimental techniques make it possible to simultaneously record the activity of multiple units. In neuroscience, multi-electrodes and optical imaging techniques capture largescale behaviors of neuronal networks, which facilitate a deeper understanding of the information processing mechanism of nervous systems beyond the single neuron level [1-6]. This preferable situation, however, involves technical issues in dealing with such datasets because they usually consist of a large amount of high-dimensional data which are difficult to be handled by naive usages of conventional statistical methods. A statistical-physics-based approach for tackling these issues was presented using the Ising model [7]. Although the justification to use the Ising model for analyzing neuronal systems is not completely clear [8,9,10], its performance was empirically demonstrated [7], which triggered further applications [11-22]. An advantage of using the Ising model is that several analytical techniques for inverse problems are available [23-29], which allows us to infer couplings between neurons with a feasible computational cost. Another advantage is that it is straightforward to introduce variants of the model. Beyond the conventional data analysis, an important variant is the kinetic Ising model, which is more suitable to take into account the correlations in time, since this extended model removes the symmetric-coupling constraint of the Ising model. A useful mean-field (MF) inverse formula for the kinetic Ising model has been presented in [25,26]. Two problems arise when treating neuronal systems’ data in the framework of the Ising models. The first problem is how to determine an appropriate size of time bins when discretizing original signals in time; the appropriate size differs from the intrinsic time-scale of the original neuronal sys- 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. tems because the Ising models are regarded as a coarse-grained description of the original systems. Hence, the way of the transformation to the models of this type is nontrivial. The second problem is extracting relevant couplings from the solution of the inverse problem; unavoidable noises in experimental data contaminate the inferred couplings, and hence, we need to screen the relevant ones among them. In a previous study [30], an information-theoretic method and a computational-statistical technique were proposed for resolving the aforementioned first and second problems, respectively. Those methods were validated in two cases: in a numerical simulation based on the Izhikevich models and in analyzing in vitro neuronal networks. 
The result is surprisingly good: their synaptic connections are reconstructed with fairly high accuracy. This finding motivates us to further examine the methods proposed in [30]. Based on this motivation, this study applies these methods to the data from the Hodgkin–Huxley model, which describes the firing dynamics of a biological neuron more accurately than the Izhikevich model. Further, we examine the situation where responses of neuronal networks are evoked by external stimuli. We implement this situation both in the Hodgkin–Huxley model and in a cultured neuronal network of a previously described design [31], and test the methods in both the cases. Besides, based on the previously described MF formula of [25,26], we derive an efficient formula implementing the previous method of screening relevant couplings within a significantly smaller computational cost. In practice, the naive implementation of the screening method is computationally expensive, and can be a bottleneck when applied to large-scale networks. Hence, we exploit the simplicity of the model, and use the advanced statistical processing with reasonable time in this work. Below, we address those three points by employing the simple kinetic Ising model, to efficiently infer synaptic couplings in neuronal networks. 2 Inference procedure The kinetic Ising model consists of N units, {si}Ni=1, and each unit takes bipolar values as si(t) = ±1. Its dynamics is governed by the so-called Glauber dynamics: P (s(t+ 1)|s(t); {Jij , θi(t)}) = N∏ i=1 exp [si(t+ 1)Hi(t; {Jij , θi(t)})] exp [Hi(t; {Jij , θi(t)})] + exp [−Hi(t; {Jij , θi(t)})] , (1) where Hi(t) is the effective field, defined as Hi(t) = θi(t) + ∑N j=1 Jijsj(t), θi(t) is the external force, and Jij is the coupling strength from j to i. This model also corresponds to a generalized McCulloch–Pitts model in theoretical neuroscience and logistic regression in statistics. When applying this to spike train data, we regard the state si(t) = 1 (-1) as the firing (non-firing) state. The inference framework we adopt here is the standard maximum-likelihood (ML) framework. We repeat R experiments and denote a firing pattern {s∗ir(t)}Ni=1 for t = 1, 2, · · · ,M in an experiment r(= 1, 2, · · · , R). The ML framework requires us to solve the following maximization problem on the variable set {Jij , θi(t)}: {Ĵij , θ̂i(t)} = argmax {Jij ,θi(t)} { 1 R R∑ r=1 M∑ t=1 logP (s∗r(t+ 1)| s∗r(t); {Jij , θi(t)}) } . (2) This cost function is concave with respect to {Jij , θi(t)}, and hence, a number of efficient solvers are available [32]. However, we do not directly maximize eq. (2) in this study but instead we employ the MF formula proposed previously [25,26]. The MF formula is reasonable in terms of the computational cost and sufficiently accurate when the dataset size R is large. Moreover, the availability of an analytic formula enables us to construct an effective approximation to reduce the computational cost in the post-processing step, as shown in Sec. 2.3. Unfortunately, in many experimental settings, it is not easy to conduct a sufficient number of independent experiments [33,34], as in the case of Sec. 4. Hence, below we assume the stationarity of any statistics, and ignore the time dependence of θ(t). This allows us to identify the average over time as the ensemble average, which significantly improves statistics. We admit this assumption is not always valid, particularly in the case where time-dependent external forces are present, although we treat such cases in Sec. 3.2 and Sec. 4.2. 
Despite this limitation, we still stress that the present approach can extract synaptic connections among neurons accurately, although the existence of the time-dependent inputs may decrease its performance. Possible directions to overcome this limitation are discussed in Sec. 5. 2.1 Pre-processing: Discretization of time and binarization of state In the pre-processing step, we have to decide the duration of the interval that should be used to transform the real time to the unit time ∆τ in the Ising scheme. We term ∆τ the bin size. Once the bin size is determined, the whole real time interval [0, T ] is divided into the set of time bins that are labelled as {t}M=T /∆τt=1 . Given this set of the time bins, we binarize the neuron states: if there is no spike train of the neuron i in the time bin with a label t, then s∗i (t) = −1; otherwise s∗i (t) = 1. This is the whole pre-processing step we adopt, and is a commonly used approach [7]. Determination of the bin size ∆τ can be a crucial issue: different values of ∆τ may lead to different results. To determine it in an objective way, we employ an information-theory-based method proposed previously [30]. Following this method, we determine the bin size as ∆τopt = argmax ∆τ ( T ∆τ − 1 )∑ i̸=j Î∆τ (si(t+ 1); sj(t)) , (3) where I∆τ (si(t+ 1); sj(t)) denotes the mutual information between si(t + 1) and sj(t) in the coarse-grained series with ∆τ , and Î∆τ (si(t+ 1); sj(t)) is its plug-in estimator. The explicit formula is Î∆τ (si(t+ 1); sj(t)) = ∑ (α,β)∈{+,−}2 rαβ(i, t+ 1; j, t) log rαβ(i, t+ 1; j, t) rα(i, t+ 1)rβ(j, t) , (4) where r++(i, t + 1; j, t) denotes the realized ratio of the pattern (si(t + 1), sj(t)) = (+1,+1), r++(i, t+1; j, t) ≡ (1/(M − 1))#{(si(t+1), sj(t)) = (+1,+1)}, and the other double-subscript quantities {r+−, r−+, r−−} are defined similarly. Single-subscript quantities are also the realized ratios of the corresponding state, for example, r+(j, t) ≡ (1/M)#{sj(t) = +1}. The meaning of eq. (3) is clear: the formula inside the brace brackets of the right-hand side, hereafter termed gross mutual information, is merely the likelihood of a (null) hypothesis that si(t + 1) and sj(t) are firing without any correlation. The optimal value ∆τopt is chosen to reject this hypothesis most strongly. This can also be regarded as a generalization of the chi-square test. 2.2 Inference algorithm: The MF formula The previously derived MF formula [25,26] is given by ĴMF = A−1DC−1, (5) where µi(t) = ⟨si(t)⟩ , Aij(t) = ( 1− µ2i (t) ) δij , Cij(t) = ⟨si(t)sj(t)⟩ − µi(t)µj(t), Dij(t) = ⟨si(t+ 1)sj(t)⟩ − µi(t+ 1)µj(t). (6) Note that the estimate ĴMF seemingly depends on time, but it is known that the time dependence is very weak and ignorable. Once given ĴMF, the MF estimate of the external field is given as θ̂MFi (t) = tanh −1 (µi(t+ 1))− ∑ j ĴMFij µj(t), (7) although we focus on the couplings between neurons and do not estimate the external force in this study. The literal meaning of the brackets is the ensemble average corresponding to (1/R) ∑R r=1 in eq. (2), but here we identify it as the average over time. Here, we use the time-averaged statistics of {µ, C,D,θ}, as declared above. 2.3 Post-processing: Screening relevant couplings and its fast approximation The basic idea of our screening method is to compare the coupling estimated from the original data with the one estimated from randomized data in which the time series of firing patterns of each neuron is randomly independently permuted. 
We do not explain the detailed procedures here because similar methods have been described previously [7,30]. Instead, here we state the essential point of the method and derive an approximate formula implementing the screening method in a computationally efficient manner. The key of the method is to compute the probability distribution of Ĵij , P (Ĵij), when applying our inference algorithm to the randomized data. Once we obtain the probability distribution, we can judge how unlikely our original estimate is as compared to the estimates from the randomized data. If the original estimate is sufficiently unlikely, we accept it as a relevant coupling; otherwise, we reject it. Evaluation of the above probability distribution is not easy in general, and hence, it is common to have recourse to numerical sampling, which can be a computational burden. Here, we avoid this problem by computing it in an analytical manner under a reasonable approximation. For the randomized data, we may assume that two neurons si and sj fire independently with fixed means µi and µj , respectively. Under this assumption, by the central limit theorem, each diagonal component of C converges to Cii = 1 − µ2i = Aii, while its non-diagonal component becomes a zero-mean Gaussian variable whose variance is proportional to 1/(M − 1), and is thus, small. All the components of D behave similarly to the non-diagonal ones of C. This consideration leads to the expression Ĵ ranij = ∑ k (A−1)iiDik(C −1)kj ≈ (A−1)iiDij(A−1)jj = 1 (1− µ2i )(1− µ2j ) Dij . (8) By the independence between si and sj , the variance of Dij becomes (1 − µ2i )(1 − µ2j )/(M − 1). Hence the probability P ( |Ĵ ranij | ≥ Φth ) is obtained as P ( |Ĵ ranij | ≥ Φth ) ≈ 1− erf Φth √ (1− µ2i )(1− µ2j )(M − 1) 2 , (9) where erf(x) is the error function defined as erf(x) ≡ 2√ π ∫ x 0 dy e−y 2 . (10) Inserting the absolute value of the original estimate of Ĵij in Φth, we obtain its likelihood, and can judge whether it should be accepted. Below, we set the significance level pth associated with (Φth)ij as (Φth)ij = √ 2 (1− µ2i )(1− µ2j )(M − 1) erf−1 (1− pth) (11) and accept only Ĵij such that |Ĵij | > (Φth)ij . 3 Hodgkin–Huxley networks We first evaluate the accuracy of our methods using synthetic systems consisting of the Hodgkin– Huxley neurons. The dynamics of the neurons are given by C dVi dτ = −ḡKn4i (Vi − EK)− ḡNam3ihi (Vi − ENa)− ḡL (Vi − EL) + Iexi , (12) dni dτ = αn (Vi) (1− ni)− βn (Vi)ni, (13) dmi dτ = αm (Vi) (1−mi)− βm (Vi)mi, (14) dhi dτ = αh (Vi) (1− hi)− βh (Vi)hi, (15) where Vi is the membrane potential of ith neuron, ni is the activation variable that represents the ratio of the open channels for K+ ion, and mi and hi are the activation and inactivation variables for Na+ ion, respectively. All parameters, except the external input term Iexi , are set as described in [35]. The input forces are given by Iexi = ci(τ) + N∑ j=1 KijVjΘ(Vj − Vth) + a ∑ k δ ( τ − τki ) , (16) where ci(t) represents the environmental noise with a Poisson process, the second term represents the couplings with the threshold voltage Vth = 30mV and the Heaviside step function Θ(·), and the last term denotes the impulse stimulations with the delta function. Here, we consider no-delay simple couplings, which we term the synaptic connections, and aim to reconstruct their structure with the excitatory/inhibitory signs using our methods. We use N = 100 neuron networks, where the 90 neurons are excitatory and have positive outgoing couplings while the others are inhibitory. 
The rate and strength of the Poisson process are set as λ = 180 Hz and b = 2 mV, respectively, for all neurons. We generate their time series by integrating (12)–(15) with the Euler method using dτ = 0.01 ms, where a neuron is regarded as firing when its voltage exceeds Vth, and we use the spike-train data over the whole period T = 10^6 ms for our inference. 3.1 Spontaneous activity case At first, we consider a system on a chain network in which each neuron has three synaptic connections to adjacent neurons in one direction. The connection strength Kij is drawn from the uniform distribution on [0.015, 0.03] for the excitatory and on [−0.06, −0.03] for the inhibitory neurons, respectively. Here, we set a = 0 mV to study the spontaneous activity. An example of the spike trains generated during 3 seconds is shown in Fig. 1 (a), where the spike times and corresponding neuronal indices are plotted. Subsequently, using the whole spike-train data, we calculate the gross mutual information for different ∆τ, and the result is indicated by the red curve in Fig. 1 (b). The curve is unimodal, which implies the existence of an optimal time bin size of approximately ∆τ = 3 ms, although the original system has no delay. We suppose that inputs must accumulate sufficiently to generate a spike, which takes a certain amount of time, and this is a possible reason for the emergence of the nontrivial time scale. To validate our approximation (8), we randomize the coarse-grained series with ∆τ = 3 ms independently in the time direction, rescale Ĵ^ran_ij by multiplying it by √((1−µ_i^2)(1−µ_j^2)(M−1)), and compare the results from 1000 randomized datasets with the standard Gaussian distribution in Fig. 1 (c), which shows their good correspondence. Coarse-graining the spike trains with ∆τ = 3 ms, we apply the inverse formula to the series and screen relevant couplings with pth = 10^−3, which leads to the estimated coupling matrix shown in Fig. 1 (e), while the one used to generate the data is shown in Fig. 1 (d). The asymmetric network structure is recovered well, with the signs of the couplings correctly discriminated. The conditional ratios of correctness are shown in Fig. 1 (f), where the inference results obtained with different values of ∆τ are also shown. This demonstrates the fairly accurate reconstruction obtained using our inference procedure. We also show the receiver operating characteristic (ROC) curves obtained by gradually changing the value of pth in Fig. 1 (g), for different values of ∆τ. We conclude that using non-optimal time bins drastically decreases the accuracy of the inference results. To illustrate the robustness of the optimal time bin, in Fig. 1 (i) we plot the means and standard deviations of the gross mutual information over 10 different simulations, showing that the variance is small and the result is robust. To consider a more general situation, we also employ a Hodgkin–Huxley system on a random network. A directed synaptic connection between every pair of neurons is generated with probability 0.1, and the excitatory and inhibitory couplings are drawn from the uniform distributions on [0.01, 0.02] and [−0.04, −0.02], respectively. The corresponding inference results for its spontaneous activity are shown by the green curves in Figs. 1 (b) and (f). The ROC curves for the three different values of ∆τ are also shown in (h).
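The ROC curves referred to above can be traced by sweeping the significance level pth and comparing the screened estimates against the ground-truth connectivity used to generate the data. The sketch below is a hypothetical helper reusing the threshold of eq. (11); it is not part of the paper's code.

```python
import numpy as np
from scipy.special import erfinv

def roc_points(J_hat, mu, M, K_true, p_grid):
    """Trace (FPR, TPR) pairs by sweeping the significance level p_th, with the
    true coupling matrix K_true as ground truth (off-diagonal entries only)."""
    off_diag = ~np.eye(len(mu), dtype=bool)
    exists = (K_true != 0) & off_diag
    absent = (K_true == 0) & off_diag
    var = np.outer(1.0 - mu**2, 1.0 - mu**2)
    points = []
    for p_th in p_grid:
        phi = np.sqrt(2.0 / (var * (M - 1))) * erfinv(1.0 - p_th)
        detected = np.abs(J_hat) > phi
        points.append((detected[absent].mean(), detected[exists].mean()))
    return points  # list of (false positive rate, true positive rate) pairs
```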
We confirm that the inference is sufficiently effective in the random-network system as well as in the chain system. 3.2 Evoked activity case We next investigate performance in systems where responses are evoked by impulse stimuli. The model parameters, except for a, are the same as those in the chain model in Sec. 3.1. The strength of the external force is set as a = 5.3mV, and the stimulations are injected to all neurons with interval 1 s. In Fig. 2 (a) we show the spike trains, where we observe that most of the neurons fire at the injection times τ = 0.5, 1.5, 2.5 s. The gross mutual information against ∆τ is shown in Fig. 2 (b). Although the curve feature is modified due to the existence of the impulse inputs, we observe that its peak is located at a similar value of ∆τ . Therefore, we use the same value ∆τ = 3ms. Applying our inference procedure with ∆τ = 3ms and pth = 10−3, we obtain the inferred couplings which are shown in Fig. 2 (c), where the original network is in Fig. 1 (d). On comparing Fig. 2 (c) with Fig. 1 (e), while the inference detects the existence of the synaptic connections, we observe more false couplings in the evoked case. The conditional ratios in Fig. 2 (d) indicate that the existence of the external inputs may increase the false positive rate with the same pth. The ROC curves are shown in Fig. 2 (f). 4 Cultured neuronal networks We apply our inference methods to the real neuronal systems introduced in a previous study [31], where rat cortical neurons were cultured in micro wells. The wells had a circular structure, and consequently the synapses of the neurons were likely to form a physically asymmetric chain network, which is similar to the situation in the Hodgkin–Huxley models we used in Sec. 3. The activity of the neurons was recorded by the multi-electrode array with 40µs time resolution, and the Efficient Technology of Spike sorting method [36] was used to identify the spike events of individual neurons. We study the spontaneous and evoked activities here. 4.1 Spontaneous activity case We first use the spontaneous activity data recorded during 120 s. The spike sorting identified 100 neurons which generated the spikes. The spike raster plot during 3 seconds is displayed in Fig. 3 (a). We calculate the gross mutual information as in case of the Hodgkin–Huxley models, and the obtained optimal bin size is approximately ∆τ = 5ms. We also confirm that the inferred couplings are similar to the results described previously [30], and this supports the validity of our novel approximation method introduced in Sec. 2.3. We show the inferred network in Figs. 3 (bd) with different values pth = 10−3, 10−6, 10−9, where we locate the nodes denoting the neurons on a circle following the experimental design [31]. A more strict threshold provides us with clear demonstration of the relevant couplings here. 4.2 Evoked activity case We next study an evoked neuronal system, where an electrical pulse stimulation is injected from an electrode after every 3 seconds, and the other experimental settings are similar to those of the spontaneous case. In this case the activity of 149 neurons were identified by the spike sorting. The example of the spike trains is shown in Fig. 4 (a). The gross mutual information is shown in Fig. 4 (b), where we can see the peak around ∆τ = 10ms. Setting ∆τ = 10ms and pth = 10−3, 10−6, we obtain the estimated coupling matrices in Figs. 4 (c,d). 
In these cases, we can also observe the prominent diagonal band representing the asymmetric chain structure, although at the looser significance levels some far-off-diagonal elements emerge due to the external inputs, a situation similar to that in the Hodgkin–Huxley simulation in Sec. 3.2. The inferred network with the strict threshold pth = 10^−9 is displayed in Fig. 4 (e), where some long-range couplings are still estimated even though the corresponding physical connections should not exist given the experimental design. 5 Conclusion and discussion We propose a systematic inference procedure for extracting couplings from point-process data. The contribution of this study is three-fold: (i) the derivation of an analytic formula to screen relevant couplings in a computationally efficient manner; (ii) an examination on the Hodgkin–Huxley model, with and without impulse stimuli; (iii) an examination on an evoked cultured neuronal network. The applications to the synthetic data, with and without the impulse stimuli, demonstrate that our inference methods reconstruct the synaptic connections fairly accurately. The application to the real data of spontaneous activity in the cultured neuronal system also highlights the effectiveness of the proposed methods in detecting synaptic connections. From the comparison between the analyses of the spontaneous and evoked activities, we found that the inference accuracy is degraded by the external stimuli. One potential origin is the violation of our stationarity assumption on the statistics {µ, C, D} caused by the time-varying external force θ. To overcome this, techniques that compensate for the insufficiency of samples, such as regularization, will be helpful. A promising approach might be the introduction of an ℓ1 regularization into eq. (2), which would enable us to automatically screen out irrelevant couplings. Comparing it with the present approach based on computational statistics will be an interesting direction for future work. Acknowledgments This work was supported by MEXT KAKENHI Grant Numbers 17H00764 (YT, TO, and YK) and 18K11463 (TO), and RIKEN Center for Brain Science (YT and TI).
1. What is the focus and contribution of the paper regarding neural networks? 2. What are the strengths of the proposed method, particularly its ability to reconstruct connections accurately? 3. What are the limitations of the work, such as bin size selection and lack of comparison with other methods? 4. How does the reviewer assess the quality and relevance of the references provided in the paper? 5. What concerns does the reviewer have regarding synaptic delays and their impact on estimated connection graphs?
Review
Review
This contribution proposes a method to reconstruct the connections in a neural network from spike recordings. The method infers couplings while retaining only the relevant connections. The proposed method is evaluated on simulated data generated by Hodgkin-Huxley neurons and on real data obtained from cultured neuronal networks. The experiments on real data are appreciated and the obtained results are impressive, as the reconstructed networks seem very accurate. Some of the limitations of this work concern the careful selection of the bin size $\Delta \tau$, since a costly parameter validation must be conducted to select the bin size correctly. It could be interesting to show the robustness of the proposed method with respect to the bin size. Another limitation is the lack of comparison with state-of-the-art methods: the authors mention previous work based on Ising models but omit works relying on transfer entropy [1], on spike metrics [2], or on inverse covariance [3]. The bibliography could also be shortened: for example, eleven references are provided at once for applied work without being discussed in detail. The experimental results indicate that evoked activity driven by external stimuli degrades the quality of the estimated connection graph. This could be a consequence of the model not taking synaptic delays into account. How are non-zero delays handled by this model?
[1] Ito, S., Hansen, M. E., Heiland, R., Lumsdaine, A., Litke, A. M., & Beggs, J. M. (2011). Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model. PloS one, 6(11), e27431.
[2] Kuroda, K., Fujiwara, K., & Ikeguchi, T. (2012, November). Identification of neural network structure from multiple spike sequences. In International Conference on Neural Information Processing (pp. 184-191). Springer, Berlin, Heidelberg.
[3] Mohler, G. (2014). Learning convolution filters for inverse covariance estimation of neural network connectivity. In Advances in Neural Information Processing Systems (pp. 891-899).
---- Edit: After the rebuttal period and based on the authors' reply, my main questions have been addressed and I changed my overall score accordingly.
NIPS
Title Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost Abstract To overcome the quadratic cost of self-attention, recent works have proposed various sparse attention modules, most of which fall under one of two groups: 1) sparse attention under a hand-crafted patterns and 2) full attention followed by a sparse variant of softmax such as α-entmax. Unfortunately, the first group lacks adaptability to data while the second still requires quadratic cost in training. In this work, we propose SBM-Transformer, a model that resolves both problems by endowing each attention head with a mixed-membership Stochastic Block Model (SBM). Then, each attention head data-adaptively samples a bipartite graph, the adjacency of which is used as an attention mask for each input. During backpropagation, a straight-through estimator is used to flow gradients beyond the discrete sampling step and adjust the probabilities of sampled edges based on the predictive loss. The forward and backward cost are thus linear to the number of edges, which each attention head can also choose flexibly based on the input. By assessing the distribution of graphs, we theoretically show that SBM-Transformer is a universal approximator for arbitrary sequence-to-sequence functions in expectation. Empirical evaluations under the LRA and GLUE benchmarks demonstrate that our model outperforms previous efficient variants as well as the original Transformer with full attention. Our implementation can be found in https://github.com/sc782/SBM-Transformer. 1 Introduction The Transformer [38] architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation [28], image classification [14], and protein language modeling [32]. Its key strength stems from the multi-head attention module, where a so-called attention score matrix computes how contextually important one token is to another for all possible token pairs. Each Transformer layer simultaneously pools the token representations based on the attention scores, eventually returning contextualized features without sequentially traversing through the input sequence as its recurrent neural network-based predecessors [16]. A well-known drawback of the original Transformer is its high computational cost in time and memory that increases quadratically with sequence length. This is due to the full pairwise computation of attention scores, which prohibits applying it in tasks involving long-range dependencies such as document summarization [17] or high-resolution image processing [48]. Many works have thus focused on developing more efficient alternatives by exploiting fixed or learnable attention sparsity patterns [8, 46, 20, 12], low-rank approximations [40, 43], or kernelized attention modules [19, 9]. Even though the efficient alternatives hold theoretical expressibility guarantees [45], they are far from sufficient, still failing to convince practitioners to replace the original Transformer. We believe this is mostly due to their lack of adaptability. They apply the same modifications to unanimously sparsify all the attention modules across layers, without considering the tasks at hand. Such strategy 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 imposes inductive bias too strongly and often leads to sub-optimal cost vs. performance trade-offs in downstream tasks [27]. 
In this work, we argue that to retain the utmost potential of Transformers, each attention module should have the ability to flexibly choose between sparse and full attention. This is especially evident when considering many state-of-the-art systems suggest the need for a mixture of dense and sparse attention layers. For example, a qualitative analysis on pretrained BERT showed that lower layers exhibit broad dense attention while upper layers perform focused sparse attention [10]. In the case of GPT-3 [6], the Transformer blocks are manually arranged to alternate between dense and sparse attention. To contribute to the efficient Transformers lineage, we propose SBM-Transformer, capable of adjusting its attention sparsity data-adaptively based without fully computing the attention score matrix (Figure 1). Leveraging a mixed-membership Stochastic Block Model (SBM) [2], each attention head samples a bipartite graph connecting queries to keys. Then, the adjacency of the sampled graph is used as an attention mask so that only attention scores corresponding to sampled edges are computed. The overall computational cost is linear in the number of edges, which can range from linear to quadratic in sequence length depending on the data and task under concern. Each attention head is equipped with its own underlying SBM, enabling the model to diversify the attention sparsity across heads and layers. By incorporating a straight-through estimator [4] in the discrete graph-sampling step, SBM-Transformer enjoys end-to-end differentiability and can find the proper attention sparsity based solely upon minimizing the predictive loss. The model can also easily be further regularized by penalizing the number of sampled edges, which results in a lighter model using less computational resources during inference. To the best of our knowledge, our method is the first Transformer architecture that can data-adaptively choose between linear to full attention with respective computational costs. To summarize, our main contributions are as follows: • We present SBM-Transformer, a novel Transformer of which each attention head can adaptively adjust its attention sparsity as well as computational cost based on the input data. • To demonstrate the benefit of this flexibility, we theoretically prove that SBM-Transformer retains universal approximability, and also stress-test the model under a synthetic task where full attention is required to achieve 100% accuracy. • Evaluations on LRA and GLUE benchmarks show that SBM-Transformer outperforms previous efficient Transformer models as well as the vanilla Transformer with dense attention. 2 Related Work In this section we discuss previous efficient Transformer variants and several works similar to ours with respect to adaptively learning sparse attention patterns. We also review several works on SBMs. Efficient Transformers. Many efficient Transformers tackle to reduce the quadratic cost of multihead attention with different approaches. While we discuss only a handful of representative approaches, a much more comprehensive survey can be found in [37]. The Linear Transformer [19] achieves linear complexity by replacing the softmax with a low-rank kernelized function. Linformer [40] and Nyströmformer [43] use a similar approach by low-rank approximating the attention score matrix. Performer [9] uses positive orthogonal random features to approximate the softmax kernel. 
Reformer [20] gathers similar tokens together through locality-sensitive hashing (LSH) and performs attention amongst tokens within the same bucket. Of all methods above, our method is most similar to Reformer, in the sense that we adaptively assign queries and keys into clusters and form a low-rank sparse attention pattern. However, our method performs soft-clustering with much less structural constraints, allowing each attention head to represent a wider variety of dependency structure and to adjust its sparsity towards full attention if needed. Adaptive Sparsity. With respect to flexible training between sparse and dense attention, there exist some works that parameterize how sparse the attention pattern should be based on the input. The Adaptive Sparse Transformer [11] proposed replacing the usual softmax activation with α-entmax, in which the α parameter can be differentiably trained to adjust the activation between softmax and sparsemax activation [25]. SparseBERT [34] uses a differentiable masking technique where each attention mask is sampled from a Gumbel-sigmoid distribution using data-independent mask probability parameters. While these methods possess the flexibility to adjust between sparse and full attention based on data, they still require full computation of the attention score matrix before sparsification, and hence are unable to leverage the learned sparsity towards better model efficiency. To the best of our knowledge, ours is the first work to be able to adaptively tune its attention sparsity between sparse to full attention without requiring the explicit computation of the attention score matrix, thereby avoiding quadratic cost when possible. Stochastic Block Models. The Stochastic Block Model (SBM) is a generative model that encodes the latent structure of graphs by grouping nodes into clusters. By modeling the cluster-membership of each node as well as inter-cluster relationships, SBMs can represent a wide variety of graph structures, which is a feature especially useful for generating new graphs or predicting missing edges in noisy data [1]. The standard SBM assigns each node to a single cluster, and the probability of an edge between two nodes strictly depends on the corresponding clusters. Several structural extensions include overlapping SBM [22] and mixed-membership SBM [2], which allow each node to be assigned to multiple clusters. The underlying SBM used by our framework mostly resembles these two variants, while the edge probability is modeled by a nonlinear function of two node embeddings rather than a bilinear one. There exist many other extensions including degree-corrected SBM [18] for multi-graphs and hierarchical SBM [29] for multiplex-graphs. Further details can be found in a recent survey [15]. 3 Preliminaries: Sparse Transformers We first introduce the full attention mechanism used in the original Transformer [38] as well as masked attention which will serve as a backbone of our approach. 3.1 Full Attention In vanilla Transformer [38], each attention head takes a sequence of token features as input X ∈ Rn×d where n is the sequence length and d the embedding dimension. Weight parameters WQ,WK ∈ Rd×dh and W V ∈ Rd×dh with head-dimension dh first maps the input features X into query Q, key K, and value V , respectively. Then, the attention score matrix is computed with scaled dot-product of queries and keys followed by row-wise softmax activation σ(·). 
Note that explicit computation of this matrix is the main bottleneck of full attention, incurring O(n2) asymptotic cost in both time and memory. The value features V are then pooled based on the attention scores, returning the output token representations. Altogether, the operation performed by each attention head can be written as Q = XWQ, K = XWK , V = XW V (1) Attn(X) = σ ( QKT√ dh ) V . (2) 3.2 Masked Attention One way to remove the quadratic bottleneck from the attention score matrix is to apply a binary mask M ∈ {0, 1}n×n and compute the scaled dot-products QiKTj / √ dh only if Mij = 1. In presence of an attention mask, the operation is modified to Attnmask(X,M) = σM ( M ⊙ QK T √ dh ) V (3) σM (A)ij := exp(Aij)∑ k∈{k′|Mik′=1} exp(Aik) if Mij = 1 0 otherwise (4) where ⊙ indicates entry-wise multiplication. Note that the masked-softmax σM (·) operator only computes unmasked terms, ensuring that each (i, j)-th attention score survives as nonzero if and only if Mij = 1. This is thus equivalent to filling in the (i, j)-th attention score with −∞ if Mij = 0, then applying the standard softmax operator. Most sparsity-based efficient Transformers fall under this formulation, while using different methods to either manually fix or learn the mask M . For instance, local attention [8, 3, 46] with a sliding window sets Mij = 1 if |i− j| < c for some context window size c while Reformer [20] sets Mij = 1 if Qi and Kj are hashed into the same bucket. 4 Our Method: SBM-Transformer Here we discuss the details of SBM-Transformer (Figure 2). We first illustrate the forward step of our attention module and how the underlying SBM [2] of each head, from which we sample our attention masks, is parameterized by the input tensors. We then discuss how the model enables end-to-end differentiability despite the discrete graph sampling step. 4.1 Forward step with the Stochastic Block Model In our framework, we view the attention mask M as an adjacency matrix of a bipartite graph that connects queries to keys, and let each attention head sample an adjacency matrix that best represents the contextual dependencies amongst input tokens. In order to efficiently sample adjacency matrices while avoiding the quadratic cost, the distribution of graphs must first be parameterized with a sub-quadratic number of latent variables. Stochastic Block Models fit perfectly for our purpose as it models graphs that are low-rank structured with k latent clusters, allowing full parameterization using O(nk) memory. More concretely, the SBM distribution is defined by two nonnegative nodeto-cluster memberships Y ,Z ∈ Rn×k+ and a so-called block matrix B ∈ Rk×k+ that stores the inter-cluster connection probabilities. The probability of node i being connected to node j is computed as p(i, j) = YiBZTj . Equivalently, the expectation of the adjacency matrix sampled from A ∼ SBM(Y ,B,Z) can be written as E[A] = Y BZT . For proper parameterization of the SBM, we must infer the nonnegative node-memberships and block matrix from the queries and keys. To do so, we equip each attention head a 2-layer MLPdh→dh with ReLU activation, and a set of k trainable cluster-embeddings C ∈ Rk×dh . First, our model computes the block matrix Ŝ ∈ Rk×k+ by taking dot products amongst cluster-embeddings C followed by a 2-dimensional softmax activation. The node embeddings are obtained by processing each query and key through the MLPdh→dh , mapping token representations into the node representation space. 
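As a reference for the masked attention of Eqns. (3)–(4), a dense NumPy sketch is shown below; it forms the full score matrix purely for illustration, whereas an efficient implementation would evaluate the dot products only where Mij = 1. The helper local_window_mask mirrors the sliding-window example just mentioned; both names are illustrative, not from the released code.

```python
import numpy as np

def masked_attention(Q, K, V, M):
    """Dense reference for Eqns. (3)-(4): weight_ij = exp(A_ij) / sum_{k: M_ik=1} exp(A_ik)
    when M_ij = 1, and exactly 0 otherwise. Not numerically stabilized; illustration only."""
    d_h = Q.shape[-1]
    A = (Q @ K.T) / np.sqrt(d_h)
    W = np.exp(A) * M                                   # masked scores contribute nothing
    denom = W.sum(axis=-1, keepdims=True)
    W = np.divide(W, denom, out=np.zeros_like(W), where=denom > 0)
    return W @ V

def local_window_mask(n, c):
    """Sliding-window pattern: M_ij = 1 iff |i - j| < c (cf. local attention in Sec. 3.2)."""
    idx = np.arange(n)
    return (np.abs(idx[:, None] - idx[None, :]) < c).astype(np.float64)
```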
The memberships of query and key nodes, which we denote by Q̂ and K̂, are then inferred by taking dot products of node and cluster embeddings, followed by a sigmoid function. The block matrix Ŝ, query node-memberships Q̂, and key node-memberships K̂ altogether provide a well-defined parameterization for the SBM. Thus, a bipartite graph adjacency M ∈ {0, 1}n×m can be sampled from M ∼ SBM(Q̂, Ŝ, K̂) with expectation E[M ] = Q̂ŜK̂T : the probability of connecting query Qi to key Kj equals p(i, j) = Q̂iŜK̂Tj . Formally, the sampling procedure can be written as Algorithm 1: fastRG(Y ,B,Z)[33] Input :Y ∈ Rn×k+ , B ∈ Rk×k+ , Z ∈ Rn×k+ Output :M ∈ {0, 1}n×n with E[M ] = Y BZT 1 Compute diagonal matrices DY = (diag(1Y ))−1 and DZ = (diag(1Z))−1 2 Column-normalize Y = Y D−1Y and Z = ZD −1 Z 3 Compute B = DY BDZ 4 Sample number of edges m ∼ Poisson(1B1T ) 5 Initialize M = 0 6 for i = 1 : m do 7 Sample (U, V ) from {1, . . . , k} × {1, . . . , k} with Pr(U = u, V = v) ∝ Buv 8 Sample source I from {1, . . . , n} with Pr(I = i) = Y iU . 9 Sample destination J from {1, . . . , n} with Pr(J = j) = ZjV 10 Set MIJ = 1. 11 end Ŝ = softmax(CCT ) (5) Q̂ = sigmoid(MLPdh→dh(Q)C T ) (6) K̂ = sigmoid(MLPdh→dh(K)C T ) (7) M ∼ SBM(Q̂, Ŝ, K̂) (8) For the last sampling step, we incorporate a fast random graph sampling algorithm fastRG (Alg. 1, [33]) that can sample graphs from a SBM in time and memory asymptotically linear in the number of edges. One advantage of fastRG is that each edge can be sampled in parallel, allowing high efficiency with the help of multiprocessing. A more significant feature of the method is that the number of edges, which determines the overall cost, is sampled from a Poisson distribution with input-dependent mean (Line 4). Thus, the model can dynamically adjust its computational cost between linear and quadratic in sequence length based on the data. Figure 3 shows example placements of nodes and clusters on the dh-dimensional space to show how the sparse structure is determined. If all nodes and clusters are gathered closely, then all entries in Q̂ and K̂ become close to 1, resulting in p(i, j) ≈ 1 for all i, j and hence a dense M . If clusters are well-separated but each surrounded by some set of nodes, Ŝ becomes close to diagonal while each row in Q̂ and K̂ is close to a one-hot vector indicating the cluster nearby. Such setting leads to a block diagonal mask similar to LSH bucketing of Reformer [20]. Lastly, if all clusters are far apart from the nodes, both Q̂ and K̂ approximately equal zero, zeroing out all the edge probabilities. 4.2 Backward Step with Straight-Through Estimator The graph sampling procedure is naturally a discrete operation. Thus, naive backpropagation cannot learn the proper parameterization for the SBM that minimizes the predictive loss. To cope with this non-differentiability, we incorporate a Straight-Through Estimator (STE) [4] to pass the gradient beyond the discrete sampling step. The STE enables providing the gradient ∂L/∂Mij to the probability for each sampled edge (i, j) (Eqn. 9). It works as if we had used a continuous mask M ⊙ E[M ] that stores the probability of each sampled edge instead of the binary mask M during forward propagation. This way, the probabilities of sampled edges can be learned end-to-end: the gradients provide information on whether each sampled edge was useful or not for prediction. 
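The forward pass of Sec. 4.1, parameterizing the SBM with Eqns. (5)–(8) and sampling a mask in the spirit of fastRG (Alg. 1), can be sketched as below. This is a compact NumPy illustration, not the authors' implementation: multi-edges are collapsed into a single entry, so the expectation of M only approximately equals Q̂ŜK̂ᵀ when edge probabilities are not small, and the straight-through backward step is sketched separately further down.

```python
import numpy as np

def softmax2d(X):
    e = np.exp(X - X.max())
    return e / e.sum()                       # softmax over all entries (Eqn. 5)

def sigmoid(X):
    return 1.0 / (1.0 + np.exp(-X))

def parameterize_sbm(Q, K, C, mlp):
    """Eqns. (5)-(8): block matrix from cluster embeddings, memberships from an MLP.
    `mlp` is any callable mapping (n, d_h) -> (n, d_h), standing in for MLP_{d_h -> d_h}."""
    S_hat = softmax2d(C @ C.T)               # (k, k)
    Q_hat = sigmoid(mlp(Q) @ C.T)            # (n, k)
    K_hat = sigmoid(mlp(K) @ C.T)            # (n, k)
    return Q_hat, S_hat, K_hat

def sample_sbm_mask(Q_hat, S_hat, K_hat, rng):
    """fastRG-style sampling: Poisson edge count, then edges placed cluster pair by
    cluster pair, with cost linear in the number of sampled edges."""
    n, k = Q_hat.shape
    y_col, z_col = Q_hat.sum(axis=0), K_hat.sum(axis=0)      # normalizers D_Y, D_Z
    B_bar = y_col[:, None] * S_hat * z_col[None, :]          # expected edges per pair
    M = np.zeros((n, K_hat.shape[0]), dtype=np.int8)
    total = rng.poisson(B_bar.sum())
    if total == 0:
        return M
    pairs = rng.choice(k * k, size=total, p=(B_bar / B_bar.sum()).ravel())
    for uv in pairs:
        u, v = divmod(uv, k)
        i = rng.choice(n, p=Q_hat[:, u] / y_col[u])
        j = rng.choice(K_hat.shape[0], p=K_hat[:, v] / z_col[v])
        M[i, j] = 1                                           # multi-edges collapse to one
    return M

# rng = np.random.default_rng(0)
# Q_hat, S_hat, K_hat = parameterize_sbm(Q, K, C, mlp)
# M = sample_sbm_mask(Q_hat, S_hat, K_hat, rng)   # then feed M to the masked attention
```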
∂L ∂pij := ∂L ∂Mij = ∂L ∂Aij · QiK T j√ dh if Mij = 1 0 otherwise where A := M ⊙ QK T √ dh (9) (b) Block-diagonal(a) Dense (c) Sparse Random Edge Exploration. While this approach enables backpropagation in the same O(m) cost as in the forward step, this comes at the expense of not being able to propagate information through edges that were not sampled. This can be problematic when an edge probability accidentally collapses to zero, after which the edge becomes unlikely to ever be sampled even when it may be useful for the prediction task at hand. Therefore, we add a small perturbation δ > 0 to each edge probability pij , allowing the model to explore new edges and resuscitate their sampling probabilities if necessary. We find that a δ as small as 0.01 significantly helps in practice, and thus use this edge exploration scheme during training for our experiments. Wouldn’t the model always prefer full attention? Note that the gradient ∂L/∂pij can be positive, which suppresses the probability of edge (i, j). At first, it may seem counter-intuitive why the model would ever limit itself to using fewer edges during training without any sparsity-based regularizations. One explanation is that masked attention provides an easy way to reduce attention scores under finite head dimensions. Under full attention, it is known that the representational space of attention score matrices is limited by the head dimension and softmax activation [5]. This limitation inevitably introduces unwanted noise in the attention scores especially when working with long sequences. In SBM-Transformer, however, the structural sparsity in masked attention introduces another dimension that induces a larger space of row-stochastic matrices (full attention is a special case of masked attention where Mij = 1 for all i, j). Therefore, it is reasonable that the model may encourage sparsity to leverage the additional expressiveness assuming the loss landscape has local optima within the sparse attention regime. Our experiments on the LRA benchmark show that this is indeed the case, as our SBM-Transformer converges to an average attention sparsity of 20% to 30% while outperforming Transformer with full attention. We also show in the experiment that we can easily incorporate additional regularization that further encourages sparse attention masks. 4.3 SBM-Transformer is a Universal Approximator Leveraging previous work on the theoretical expressiveness of sparse attention [45, 46], we show that SBM-Transformer with a small modification1 retains the same level of expressibility as full attention. Specifically, we show that the low-rank structure of the underlying SBMs does not degrade the expressive power of Transformer, and that SBM-Transformer can universally approximate arbitrary functions with O(n) connections. For brevity, we provide a rough overview of the proof and defer further details to Appendix A. Theorem 1. Let f ∈ F be class of continuous sequence-to-sequence functions. T h,r,mSBM denote the class of SBM-Transformers with h attention heads, m head dimension, and r dimensions in hidden layers. Then for any ϵ > 0 and 1 ≤ p < ∞, there exists a function g ∈ T h,m,rSBM such that∫ D ∥f(X)− E[g(X)]∥ppdX ≤ ϵ (10) 1Here we consider a variant of SBM-Transformer where self-loops are added manually (i.e. Mii = 1 for all i). While this is useful in theoretical analysis, we find that not having self-loops slightly helps in empirical performance and hence omit self-loops for the main experiments. 
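A minimal PyTorch illustration of the straight-through trick of Eqn. (9) is given below. It is a sketch rather than the paper's implementation: a Bernoulli draw over the dense probability matrix stands in for the fastRG sampler, which is exactly the quadratic computation the actual method avoids, and the delta term implements the random edge exploration described above.

```python
import torch

def straight_through(p: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Forward: the hard 0/1 mask. Backward: gradient w.r.t. p is nonzero only on
    sampled edges (straight-through estimator), as if M * E[M] had been used."""
    soft = mask * p
    return soft + (mask - soft).detach()

def sbm_attention_head(Q, K, V, Q_hat, S_hat, K_hat, delta=0.01):
    """Dense illustration of one SBM-Transformer head; O(n^2) on purpose for clarity."""
    d_h = Q.shape[-1]
    p = (Q_hat @ S_hat @ K_hat.transpose(-1, -2) + delta).clamp(0.0, 1.0)
    mask = torch.bernoulli(p).detach()             # discrete sample, outside autograd
    m = straight_through(p, mask)
    A = m * (Q @ K.transpose(-1, -2)) / d_h**0.5   # cf. A := M ⊙ QK^T / sqrt(d_h)
    A = A.masked_fill(mask == 0, float("-inf"))    # masked softmax of Sec. 3.2
    weights = torch.softmax(A, dim=-1)
    weights = torch.nan_to_num(weights)            # guard rows with no sampled edge
    return weights @ V
```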
According to the main theorem of Yun et al. (2020) [44], SBM-Transformer achieves universal approximability if 1) each node attends to itself, 2) the aggregation of all attention patterns contains a Hamiltonian path, and 3) there exists a path between all node pairs. While the first condition is trivially true due to our modification, the other two conditions require careful choice of three SBMs. Here we first parameterize one SBM to hard-assign tokens into k equally-sized clusters, inducing a block-diagonal attention pattern. The other two SBMs are parameterized such that the two graphs together form a star graph with k global relay tokens. Combining the three attention patterns lead to a parameterization of SBM-Transformer that satisfies all three conditions, hence proving the theorem. 5 Experiments For empirical evaluations, we first use a synthetic task to show that our model is flexible enough to learn towards full attention when needed in contrast to previous works. We then experiment on Long Range Arena (LRA) [36], a benchmark widely used to assess the capacity of efficient Transformers in learning long-range contexts across different modalities. Lastly, we show results on the GLUE benchmark [39] to assess the performance of SBM-Transformer in a downstream NLP setting. All experiments were run on a remote GCP server equipped with 16 NVIDIA A100 Tensor Core GPUs. 5.1 Synthetic Task: Finding Repeated Tokens Dataset. We formulate a token-level binary classification task as follows: each input sequence consists of N integers, each of which is uniformly sampled from {1, 2, . . . , N}. We use N = 256 in our setup. The prediction target is a sequence of equal length, where each token is labeled 1 if there exists a duplicate somewhere within the sequence, and 0 otherwise. Below is a simple example with N = 8 that illustrates the task. We measure the performance of models via binary cross-entropy loss. Input: 1 4 3 7 3 2 3 1⇒ Target: 1 0 1 0 1 0 1 1 Methods. For this task, we compare SBM-Transformer with k = 128 clusters against various efficient Transformers: Linear Transformer [19], Linformer [40], Reformer [20], Performer [9], and Nyströmformer [43]. Across all methods, we use a single-layer and single-head architecture with 32 hidden dimensions. Note that due to this constrained setting, the sole head must perform full attention to compare each token to all the others in order to attain 100% accuracy. All models are trained for 2000 epochs where a new batch of sequences is sampled on-the-fly at each epoch. We use a batch size of 256 and learning rate of 1e-3. Results. Figure 4 shows the training loss curves of each baseline method as well as SBMTransformer. Full attention quickly converges to 100% accuracy, which is expected as it computes all possible pairwise interactions by default. Other models that apply low-rank or kernelized attention fail to achieve the same level of accuracy, due to limited expressibility under the constrained setting. Though SBM-Transformer converges more slowly compared to full-attention, it demonstrates the ability to drive itself towards full-attention, eventually attaining zero loss. 5.2 Long Range Arena (LRA) To demonstrate that the flexible inductive bias of SBM-Transformer is effective for modeling longrange dependencies, we test SBM-Transformer against previous work on the LRA benchmark. We also test how the performance is affected with respect to applying a sparsity-based regularizer. Dataset. 
LRA [36] consists of five different testbeds with varying modalities: LISTOPS [26] is a 10-way classification task to map a sequence of single-digit numbers and 4 different set operations, to its corresponding solution. TEXT [24] is a binary classification task where byte-level IMDB movie reviews must be classified into one of positive or negative sentiments. RETRIEVAL [30] is also a char-level binary classification task, where two sequences from ACL Anthology papers are given as input, and the model must predict whether there exists a citation link between them. IMAGE [21] is a 10-way classification task mapping flattened pixel-sequences from CIFAR-10 to its class. PATHFINDER [23] provides flattened pixel-sequences from an image and the model must decide whether two circles in the image are connected by a dashed line. For this benchmark, we use the PyTorch implementation of LRA provided by the authors of Nyströmformer [43] and adhere to the same train-test splits. Performance in all five tasks is measured using classification accuracy. Methods. We compare SBM-Transformer against the same baselines as with the synthetic task above. For fair comparison, we set all Transformer models to use the default setting used in [43], which fixes 2 layers, 2 attention heads, and 64 embedding dimensions. For SBM-Transformer, we use k = 128 clusters. The output token representations are mean-pooled to obtain the sequence representation for all tasks. More details on the architecture setups can be found in Appendix C. Results. Table 1 shows the test accuracies of each method. Our SBM-Transformer achieves the best overall performance, ranking first in two tasks, and second in one other. SBM-Transformer also outperforms full attention in all five tasks while computing 30% or less attention scores on average, which supports our claim that masked attention with partial attention score computations can be preferred over full attention depending on the task. With respect to the attention mask structure, we find that flexibility of SBM is indeed beneficial, as Reformer struggles in LISTOPS, most likely due to the inability of block-diagonal masks to model hierarchical contexts. Mask Density Regularization. To test if the model can effectively learn under a constraint on the computational cost, we also test the model under a sparsity-based regularizer that discourages excessive use of query-key edges. We penalize each sampled edge by adding to the predictive loss a weighted regularization term λLs, where Ls denotes the average mask density across all attention heads. Table 2 shows the performance of SBM-Transformer across varying regularization weights. Under strong regularization, the model surprisingly retains competitive performance while significantly reducing the average mask density. This indicates that similar local optima are shared across regimes with varying attention density in the loss landscape, and the regularization term is able to drive the model towards finding optimal attention scores with smaller density. Efficiency. Furthermore, we compare computational costs during inference by measuring FLOP count and peak memory usage. For SBM-Transformer, we test the model trained under λ = 10−1. Due to lack of support for sparse tensor operations in existing FLOP-counters, we measure FLOP counts by manually enumerating through each tensor operation. Table 3 shows that SBM-Transformer is comparably efficient across all tasks except for TEXT, where SBM-Transformer showed the largest average mask density. 
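The λLs penalty used in Table 2 can be assembled as in the sketch below (illustrative names, not from the released code), assuming the collected masks are the straight-through tensors so that the penalty's gradient reaches the edge probabilities.

```python
import torch

def mask_density_penalty(masks, lam):
    """masks: iterable of per-head mask tensors (straight-through values, shape n x n).
    L_s is their average density; lam * L_s is added to the predictive loss."""
    L_s = torch.stack([m.float().mean() for m in masks]).mean()
    return lam * L_s

# loss = task_loss + mask_density_penalty(all_sampled_masks, lam=1e-1)
```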
Note that while the cost of other baselines are fixed after initialization, the cost of SBM-Transformer is data-adaptive and can vary input-by-input. Further analysis and qualitative examples demonstrating the input-dependent attention mask densities can be found in Appendix C. Layerwise Diversity in Sparsity. We also compare the densities of masks sampled at each layer of SBM-Transformer during test time to examine whether our model is capable of diversifying sparsity across layers for better performance. Recall that this allows models to gather information in different levels, as seen in pretrained BERT where lower layers focus on the overall content via dense attention while upper layers gather syntactic information with tree-like patterns [10]. For each of the five tasks, we pick two highest-performing models (one for unregularized and another for regularized) for measurement. Figure 5 shows the average layer-wise mask densities of unregularized and regularized SBM-Transformers across different tasks. We find that under no regularization, the two layers can differ by more than 10% in tasks such as LISTOPS and IMAGE. This may be due to the hierarchical and compositional structure of the two tasks. We also find that the variation is relatively low in TEXT with densities around 25%, indicating that the task requires broad attention overall. Lastly, the standard deviation is extremely large in upper layers for PATHFINDER, showing that it samples a wide variety of masks depending on the input. 5.3 General Language Understanding Evaluation (GLUE) To check whether its strong performance demonstrated in LRA extends to the downstream NLP setting as well, we evaluate SBM-Transformer against baselines on the GLUE benchmark [39]. Dataset. We consider four NLP tasks in GLUE [39]. SST-2 [35] consists of movie reviews the model must predict their positive or negative sentiments. For QQP [7], the task is to determine whether one question is a paraphrase of the other given a pair of questions. MNLI [42] consists of sentence pairs, each with a target label indicating whether the two sentences are connected through entailment, contradiction, or neither. QNLI [31] consists of sentence-question pairs and the task is to determine whether the sentence contains an answer to the question. Each task is formulated as sequence classification, and we measure performance by F1 score on the respective validation sets. Methods. Following previous work [43], we arrange a small variant of BERT [13] with 4 layers, 8 attention heads, and 512 embedding dimensions. We replace full attention with each attention module used in previous experiments. For SBM-Transformer, we use k = 128 clusters without sparsity regularization (i.e. λ = 0). Here, we find that adding local attention significantly boosts performance, and thus fix a sliding window of size 64 to SBM-Transformer. We first pretrain each model under the masked language modeling objective for 50 epochs on a corpus with text from English Wikipedia, BookCorpus [50], and RealNews [47]. We then finetune each pretrained model for 5 epochs on the GLUE training sets. More details on the architecture and training setup can be found in Appendix C. Results. Table 4 reports the F1 scores of each method on different NLP tasks. SBMTransformer performs competitively against full attention overall, and outperforms all baselines in SST-2 and QQP. 
We also find that the fine-tuned SBM-Transformer models use 13.5% dense attention masks on average across all tasks, showing that the model can encode useful information from input sentences effectively under highly sparse attention. 6 Conclusion We propose SBM-Transformer, an efficient Transformer that can data-adaptively choose its attention sparsity between sparse and full attention without the need to explicitly compute the full attention score matrix. Theoretically, we show that our model enjoys the same expressibility as the original Transformer due to the flexibility of the latent SBM. Empirical experiments on LRA and GLUE show that our model performs competitively against previous state-of-the-art efficient Transformers. Nonetheless, there are limitations due to sparse tensor operations being less optimized on GPU kernels. In the LRA experiments, we found that SBM-Transformer can result in longer runtimes compared to dense counterparts while its memory usage is much lower. While previous sparsity-based attention mechanisms with block-sparse attention are much more amenable for GPU computation [46, 8, 3], our work requires an architecture with better workload balancing and acceleration under unstructured sparsity, for which there is ongoing work [41, 49]. We still believe this work is valuable as it is the first approach to induce per-example attention sparsity, allowing the model to adjust its computational cost based on the input. The cost being dependent on the number of edges also allows practitioners to easily impose constraints based on the available computational resources. We hope to see more GPU-friendly tensor operations optimized for finegrained sparsity in the future, at which point the value of this work will increase even further. As we propose a foundational replacement for the scaled dot-product attention module in the Transformer architecture, we do not expect any immediate negative societal impact due to this work. Acknowledgments and Disclosure of Funding We would like to thank Kun Dong for the insightful comments. This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00926, 2022-0-00959, 2021-0-02068, and 2019-0-00075).
1. What is the focus and contribution of the paper on adaptive attention module? 2. What are the strengths of the proposed approach, particularly in its originality and quality? 3. What are the weaknesses of the paper regarding experiment convincingness and lack of information on runtime and comparison to other methods? 4. Do you have any concerns or suggestions regarding the paper's clarity and significance? 5. Can the authors provide more information on the extra computational cost and comparison to other efficient attention mechanisms?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors propose to adaptively learn the sparsity of the attention module by modeling the mask matrix as a bipartite graph generated by an SBM. Specifically, they discuss how the new attention mechanism applies to both forward and backward propagation. They use Eqns. (5), (6), (7), and (8) to model the mask matrix in forward propagation, which introduces a trainable cluster embedding matrix C and two MLPs to decide the node memberships of the tokens in $Q$ and $K$. They "incorporate a Straight-Through Estimator (STE) to pass the gradient beyond the discrete sampling", which works as if $M \odot \mathbb{E}[M]$ were used in forward propagation. They also show that the new attention mechanism is a universal approximator, and evaluate the proposed method on the Long Range Arena (LRA) benchmark.
Strengths And Weaknesses
Originality: The idea of modeling the sparsity of attention with an SBM is new, though it is formally similar to the classical idea in pruning of directly modeling the whole mask matrix.
Quality: Pros: The authors propose a new data-adaptive sparse attention mechanism. Cons: The experiments on LRA are not that convincing nowadays given the recent paper "Efficiently Modeling Long Sequences with Structured State Spaces" (ICLR 2022), which shows that current Transformers are far worse than their state-space-based method. It is therefore doubtful whether the conclusion would still hold in other settings or benchmarks at which Transformers are better. The actual runtime of the new method is not reported or compared to existing methods.
Clarity: The clarity of this paper is awesome. One minor concern: to my knowledge, "SBM" should be short for "stochastic block model" rather than "stochastic blockmodel". Also, the statement of Theorem 1 is not very informative; it would be better if the content of Appendix A showing that Eqn. (10) can hold with $O(n)$ connections were moved to the main paper.
Significance: The problem studied is important: how to sparsify and thereby accelerate attention. However, the GPU-unfriendly sparse matrix multiplication limits the significance.
Questions
Could the authors show the extra time and space cost of adding the SBM mechanism? How does it compare to other efficient attention mechanisms?
Limitations
N/A
NIPS
Title Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost Abstract To overcome the quadratic cost of self-attention, recent works have proposed various sparse attention modules, most of which fall under one of two groups: 1) sparse attention under a hand-crafted patterns and 2) full attention followed by a sparse variant of softmax such as α-entmax. Unfortunately, the first group lacks adaptability to data while the second still requires quadratic cost in training. In this work, we propose SBM-Transformer, a model that resolves both problems by endowing each attention head with a mixed-membership Stochastic Block Model (SBM). Then, each attention head data-adaptively samples a bipartite graph, the adjacency of which is used as an attention mask for each input. During backpropagation, a straight-through estimator is used to flow gradients beyond the discrete sampling step and adjust the probabilities of sampled edges based on the predictive loss. The forward and backward cost are thus linear to the number of edges, which each attention head can also choose flexibly based on the input. By assessing the distribution of graphs, we theoretically show that SBM-Transformer is a universal approximator for arbitrary sequence-to-sequence functions in expectation. Empirical evaluations under the LRA and GLUE benchmarks demonstrate that our model outperforms previous efficient variants as well as the original Transformer with full attention. Our implementation can be found in https://github.com/sc782/SBM-Transformer. 1 Introduction The Transformer [38] architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation [28], image classification [14], and protein language modeling [32]. Its key strength stems from the multi-head attention module, where a so-called attention score matrix computes how contextually important one token is to another for all possible token pairs. Each Transformer layer simultaneously pools the token representations based on the attention scores, eventually returning contextualized features without sequentially traversing through the input sequence as its recurrent neural network-based predecessors [16]. A well-known drawback of the original Transformer is its high computational cost in time and memory that increases quadratically with sequence length. This is due to the full pairwise computation of attention scores, which prohibits applying it in tasks involving long-range dependencies such as document summarization [17] or high-resolution image processing [48]. Many works have thus focused on developing more efficient alternatives by exploiting fixed or learnable attention sparsity patterns [8, 46, 20, 12], low-rank approximations [40, 43], or kernelized attention modules [19, 9]. Even though the efficient alternatives hold theoretical expressibility guarantees [45], they are far from sufficient, still failing to convince practitioners to replace the original Transformer. We believe this is mostly due to their lack of adaptability. They apply the same modifications to unanimously sparsify all the attention modules across layers, without considering the tasks at hand. Such strategy 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 1 imposes inductive bias too strongly and often leads to sub-optimal cost vs. performance trade-offs in downstream tasks [27]. 
In this work, we argue that to retain the utmost potential of Transformers, each attention module should have the ability to flexibly choose between sparse and full attention. This is especially evident when considering many state-of-the-art systems suggest the need for a mixture of dense and sparse attention layers. For example, a qualitative analysis on pretrained BERT showed that lower layers exhibit broad dense attention while upper layers perform focused sparse attention [10]. In the case of GPT-3 [6], the Transformer blocks are manually arranged to alternate between dense and sparse attention. To contribute to the efficient Transformers lineage, we propose SBM-Transformer, capable of adjusting its attention sparsity data-adaptively based without fully computing the attention score matrix (Figure 1). Leveraging a mixed-membership Stochastic Block Model (SBM) [2], each attention head samples a bipartite graph connecting queries to keys. Then, the adjacency of the sampled graph is used as an attention mask so that only attention scores corresponding to sampled edges are computed. The overall computational cost is linear in the number of edges, which can range from linear to quadratic in sequence length depending on the data and task under concern. Each attention head is equipped with its own underlying SBM, enabling the model to diversify the attention sparsity across heads and layers. By incorporating a straight-through estimator [4] in the discrete graph-sampling step, SBM-Transformer enjoys end-to-end differentiability and can find the proper attention sparsity based solely upon minimizing the predictive loss. The model can also easily be further regularized by penalizing the number of sampled edges, which results in a lighter model using less computational resources during inference. To the best of our knowledge, our method is the first Transformer architecture that can data-adaptively choose between linear to full attention with respective computational costs. To summarize, our main contributions are as follows: • We present SBM-Transformer, a novel Transformer of which each attention head can adaptively adjust its attention sparsity as well as computational cost based on the input data. • To demonstrate the benefit of this flexibility, we theoretically prove that SBM-Transformer retains universal approximability, and also stress-test the model under a synthetic task where full attention is required to achieve 100% accuracy. • Evaluations on LRA and GLUE benchmarks show that SBM-Transformer outperforms previous efficient Transformer models as well as the vanilla Transformer with dense attention. 2 Related Work In this section we discuss previous efficient Transformer variants and several works similar to ours with respect to adaptively learning sparse attention patterns. We also review several works on SBMs. Efficient Transformers. Many efficient Transformers tackle to reduce the quadratic cost of multihead attention with different approaches. While we discuss only a handful of representative approaches, a much more comprehensive survey can be found in [37]. The Linear Transformer [19] achieves linear complexity by replacing the softmax with a low-rank kernelized function. Linformer [40] and Nyströmformer [43] use a similar approach by low-rank approximating the attention score matrix. Performer [9] uses positive orthogonal random features to approximate the softmax kernel. 
Reformer [20] gathers similar tokens together through locality-sensitive hashing (LSH) and performs attention amongst tokens within the same bucket. Of all methods above, our method is most similar to Reformer, in the sense that we adaptively assign queries and keys into clusters and form a low-rank sparse attention pattern. However, our method performs soft-clustering with much less structural constraints, allowing each attention head to represent a wider variety of dependency structure and to adjust its sparsity towards full attention if needed. Adaptive Sparsity. With respect to flexible training between sparse and dense attention, there exist some works that parameterize how sparse the attention pattern should be based on the input. The Adaptive Sparse Transformer [11] proposed replacing the usual softmax activation with α-entmax, in which the α parameter can be differentiably trained to adjust the activation between softmax and sparsemax activation [25]. SparseBERT [34] uses a differentiable masking technique where each attention mask is sampled from a Gumbel-sigmoid distribution using data-independent mask probability parameters. While these methods possess the flexibility to adjust between sparse and full attention based on data, they still require full computation of the attention score matrix before sparsification, and hence are unable to leverage the learned sparsity towards better model efficiency. To the best of our knowledge, ours is the first work to be able to adaptively tune its attention sparsity between sparse to full attention without requiring the explicit computation of the attention score matrix, thereby avoiding quadratic cost when possible. Stochastic Block Models. The Stochastic Block Model (SBM) is a generative model that encodes the latent structure of graphs by grouping nodes into clusters. By modeling the cluster-membership of each node as well as inter-cluster relationships, SBMs can represent a wide variety of graph structures, which is a feature especially useful for generating new graphs or predicting missing edges in noisy data [1]. The standard SBM assigns each node to a single cluster, and the probability of an edge between two nodes strictly depends on the corresponding clusters. Several structural extensions include overlapping SBM [22] and mixed-membership SBM [2], which allow each node to be assigned to multiple clusters. The underlying SBM used by our framework mostly resembles these two variants, while the edge probability is modeled by a nonlinear function of two node embeddings rather than a bilinear one. There exist many other extensions including degree-corrected SBM [18] for multi-graphs and hierarchical SBM [29] for multiplex-graphs. Further details can be found in a recent survey [15]. 3 Preliminaries: Sparse Transformers We first introduce the full attention mechanism used in the original Transformer [38] as well as masked attention which will serve as a backbone of our approach. 3.1 Full Attention In vanilla Transformer [38], each attention head takes a sequence of token features as input X ∈ Rn×d where n is the sequence length and d the embedding dimension. Weight parameters WQ,WK ∈ Rd×dh and W V ∈ Rd×dh with head-dimension dh first maps the input features X into query Q, key K, and value V , respectively. Then, the attention score matrix is computed with scaled dot-product of queries and keys followed by row-wise softmax activation σ(·). 
Note that explicit computation of this matrix is the main bottleneck of full attention, incurring O(n^2) asymptotic cost in both time and memory. The value features V are then pooled based on the attention scores, returning the output token representations. Altogether, the operation performed by each attention head can be written as

$Q = XW^Q, \quad K = XW^K, \quad V = XW^V \quad (1)$

$\mathrm{Attn}(X) = \sigma\!\left(\frac{QK^T}{\sqrt{d_h}}\right) V. \quad (2)$

3.2 Masked Attention
One way to remove the quadratic bottleneck from the attention score matrix is to apply a binary mask M ∈ {0, 1}^{n×n} and compute the scaled dot-products $Q_i K_j^T / \sqrt{d_h}$ only if $M_{ij} = 1$. In the presence of an attention mask, the operation is modified to

$\mathrm{Attn}_{\mathrm{mask}}(X, M) = \sigma_M\!\left(M \odot \frac{QK^T}{\sqrt{d_h}}\right) V \quad (3)$

$\sigma_M(A)_{ij} := \begin{cases} \dfrac{\exp(A_{ij})}{\sum_{k \in \{k' \mid M_{ik'} = 1\}} \exp(A_{ik})} & \text{if } M_{ij} = 1 \\ 0 & \text{otherwise} \end{cases} \quad (4)$

where ⊙ indicates entry-wise multiplication. Note that the masked-softmax operator σ_M(·) only computes unmasked terms, ensuring that the (i, j)-th attention score survives as nonzero if and only if M_ij = 1. This is thus equivalent to filling in the (i, j)-th attention score with −∞ if M_ij = 0 and then applying the standard softmax operator. Most sparsity-based efficient Transformers fall under this formulation, while using different methods to either manually fix or learn the mask M. For instance, local attention [8, 3, 46] with a sliding window sets M_ij = 1 if |i − j| < c for some context window size c, while Reformer [20] sets M_ij = 1 if Q_i and K_j are hashed into the same bucket.

4 Our Method: SBM-Transformer
Here we discuss the details of SBM-Transformer (Figure 2). We first illustrate the forward step of our attention module and how the underlying SBM [2] of each head, from which we sample our attention masks, is parameterized by the input tensors. We then discuss how the model enables end-to-end differentiability despite the discrete graph sampling step.

4.1 Forward step with the Stochastic Block Model
In our framework, we view the attention mask M as the adjacency matrix of a bipartite graph that connects queries to keys, and let each attention head sample an adjacency matrix that best represents the contextual dependencies amongst input tokens. In order to efficiently sample adjacency matrices while avoiding the quadratic cost, the distribution of graphs must first be parameterized with a sub-quadratic number of latent variables. Stochastic Block Models fit our purpose perfectly, as they model graphs with a low-rank structure induced by k latent clusters, allowing full parameterization using O(nk) memory. More concretely, the SBM distribution is defined by two nonnegative node-to-cluster membership matrices Y, Z ∈ R_+^{n×k} and a so-called block matrix B ∈ R_+^{k×k} that stores the inter-cluster connection probabilities. The probability of node i being connected to node j is computed as $p(i, j) = Y_i B Z_j^T$. Equivalently, the expectation of the adjacency matrix sampled from A ∼ SBM(Y, B, Z) can be written as $\mathbb{E}[A] = YBZ^T$. For a proper parameterization of the SBM, we must infer the nonnegative node memberships and block matrix from the queries and keys. To do so, we equip each attention head with a 2-layer MLP_{d_h→d_h} with ReLU activation and a set of k trainable cluster embeddings C ∈ R^{k×d_h}. First, our model computes the block matrix Ŝ ∈ R_+^{k×k} by taking dot products amongst the cluster embeddings C, followed by a 2-dimensional softmax activation. The node embeddings are obtained by processing each query and key through the MLP_{d_h→d_h}, mapping token representations into the node representation space.
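The masked attention of Eqs. (3)-(4), which will consume the mask sampled in this section, can be made concrete with a short PyTorch sketch. The function and variable names below are ours, and the dense boolean mask is used purely for illustration; an efficient implementation would compute dot-products only at positions where M_ij = 1, e.g., via sparse tensors.

```python
import torch

def masked_attention(Q, K, V, M):
    """Illustrative dense-mask version of Eqs. (3)-(4): masked logits are set
    to -inf, which is equivalent to the masked-softmax sigma_M; rows with no
    unmasked entry are zeroed out afterwards."""
    d_h = Q.shape[-1]
    logits = (Q @ K.transpose(-1, -2)) / d_h ** 0.5     # (n, n) attention logits
    logits = logits.masked_fill(M == 0, float("-inf"))  # mask out unsampled pairs
    scores = torch.softmax(logits, dim=-1)
    scores = torch.nan_to_num(scores)                   # fully masked rows -> 0
    return scores @ V

# toy usage
n, d_h = 6, 8
Q, K, V = (torch.randn(n, d_h) for _ in range(3))
M = (torch.rand(n, n) < 0.5).long()
out = masked_attention(Q, K, V, M)                      # (n, d_h)
```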
The memberships of the query and key nodes, which we denote by Q̂ and K̂, are then inferred by taking dot products of the node and cluster embeddings, followed by a sigmoid function. The block matrix Ŝ, query node-memberships Q̂, and key node-memberships K̂ altogether provide a well-defined parameterization for the SBM. Thus, a bipartite graph adjacency M ∈ {0, 1}^{n×n} can be sampled from M ∼ SBM(Q̂, Ŝ, K̂) with expectation $\mathbb{E}[M] = \hat{Q}\hat{S}\hat{K}^T$: the probability of connecting query Q_i to key K_j equals $p(i, j) = \hat{Q}_i \hat{S} \hat{K}_j^T$. Formally, the sampling procedure can be written as

$\hat{S} = \mathrm{softmax}(CC^T) \quad (5)$

$\hat{Q} = \mathrm{sigmoid}\big(\mathrm{MLP}_{d_h \to d_h}(Q)\, C^T\big) \quad (6)$

$\hat{K} = \mathrm{sigmoid}\big(\mathrm{MLP}_{d_h \to d_h}(K)\, C^T\big) \quad (7)$

$M \sim \mathrm{SBM}(\hat{Q}, \hat{S}, \hat{K}) \quad (8)$

Algorithm 1: fastRG(Y, B, Z) [33]
Input: Y ∈ R_+^{n×k}, B ∈ R_+^{k×k}, Z ∈ R_+^{n×k}
Output: M ∈ {0, 1}^{n×n} with E[M] = YBZ^T
1: Compute the diagonal column-sum matrices D_Y = diag(1^T Y) and D_Z = diag(1^T Z)
2: Column-normalize Ỹ = Y D_Y^{-1} and Z̃ = Z D_Z^{-1}
3: Compute B̃ = D_Y B D_Z
4: Sample the number of edges m ∼ Poisson(1^T B̃ 1)
5: Initialize M = 0
6: for i = 1 : m do
7:   Sample (U, V) from {1, ..., k} × {1, ..., k} with Pr(U = u, V = v) ∝ B̃_uv
8:   Sample a source I from {1, ..., n} with Pr(I = i) = Ỹ_iU
9:   Sample a destination J from {1, ..., n} with Pr(J = j) = Z̃_jV
10:  Set M_IJ = 1
11: end for

For the last sampling step, we incorporate a fast random graph sampling algorithm, fastRG (Alg. 1, [33]), that can sample graphs from an SBM in time and memory asymptotically linear in the number of edges. One advantage of fastRG is that each edge can be sampled in parallel, allowing high efficiency with the help of multiprocessing. A more significant feature of the method is that the number of edges, which determines the overall cost, is sampled from a Poisson distribution with input-dependent mean (Line 4). Thus, the model can dynamically adjust its computational cost between linear and quadratic in sequence length based on the data. Figure 3 shows example placements of nodes and clusters in the d_h-dimensional space to illustrate how the sparse structure is determined. If all nodes and clusters are gathered closely, then all entries in Q̂ and K̂ become close to 1, resulting in p(i, j) ≈ 1 for all i, j and hence a dense M. If clusters are well-separated but each surrounded by some set of nodes, Ŝ becomes close to diagonal while each row in Q̂ and K̂ is close to a one-hot vector indicating the nearby cluster. Such a setting leads to a block-diagonal mask similar to the LSH bucketing of Reformer [20]. Lastly, if all clusters are far apart from the nodes, both Q̂ and K̂ approximately equal zero, zeroing out all the edge probabilities.
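A compact NumPy sketch of the fastRG sampler in Algorithm 1 is given below. It is our own illustrative reconstruction rather than the reference implementation of [33], and it assumes the membership matrices have strictly positive entries, as is the case for the sigmoid-activated Q̂ and K̂ above.

```python
import numpy as np

def fast_rg(Y, B, Z, rng=None):
    """Sketch of fastRG (Alg. 1): sample a binary mask M with E[M] = Y B Z^T
    in time roughly linear in the number of sampled edges."""
    rng = rng or np.random.default_rng()
    n, k = Y.shape
    d_y, d_z = Y.sum(axis=0), Z.sum(axis=0)       # column sums (D_Y, D_Z)
    Y_t, Z_t = Y / d_y, Z / d_z                   # column-normalized memberships
    B_t = d_y[:, None] * B * d_z[None, :]         # B_tilde = D_Y B D_Z
    m = rng.poisson(B_t.sum())                    # number of edges ~ Poisson
    M = np.zeros((n, Z.shape[0]), dtype=np.int8)
    blocks = rng.choice(k * k, size=m, p=(B_t / B_t.sum()).ravel())
    for b in blocks:                              # edges could be sampled in parallel
        u, v = divmod(b, k)
        i = rng.choice(n, p=Y_t[:, u])            # source (query) node
        j = rng.choice(Z.shape[0], p=Z_t[:, v])   # destination (key) node
        M[i, j] = 1
    return M
```

In the model, Y and Z would be Q̂ and K̂ and B would be Ŝ, so that the expected mask equals Q̂ŜK̂^T as stated above.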
$\frac{\partial L}{\partial p_{ij}} := \frac{\partial L}{\partial M_{ij}} = \begin{cases} \dfrac{\partial L}{\partial A_{ij}} \cdot \dfrac{Q_i K_j^T}{\sqrt{d_h}} & \text{if } M_{ij} = 1 \\ 0 & \text{otherwise} \end{cases} \quad \text{where } A := M \odot \frac{QK^T}{\sqrt{d_h}} \quad (9)$

Figure 3: example mask structures. (a) Dense, (b) Block-diagonal, (c) Sparse.

Random Edge Exploration. While this approach enables backpropagation at the same O(m) cost as the forward step, it comes at the expense of not being able to propagate information through edges that were not sampled. This can be problematic when an edge probability accidentally collapses to zero, after which the edge becomes unlikely to ever be sampled, even when it may be useful for the prediction task at hand. Therefore, we add a small perturbation δ > 0 to each edge probability p_ij, allowing the model to explore new edges and resuscitate their sampling probabilities if necessary. We find that a δ as small as 0.01 helps significantly in practice, and thus use this edge exploration scheme during training in our experiments.

Wouldn't the model always prefer full attention? Note that the gradient ∂L/∂p_ij can be positive, which suppresses the probability of edge (i, j). At first, it may seem counter-intuitive why the model would ever limit itself to using fewer edges during training without any sparsity-based regularization. One explanation is that masked attention provides an easy way to reduce attention scores under finite head dimensions. Under full attention, it is known that the representational space of attention score matrices is limited by the head dimension and the softmax activation [5]. This limitation inevitably introduces unwanted noise in the attention scores, especially when working with long sequences. In SBM-Transformer, however, the structural sparsity in masked attention introduces another dimension that induces a larger space of row-stochastic matrices (full attention is a special case of masked attention where M_ij = 1 for all i, j). Therefore, it is reasonable that the model may encourage sparsity to leverage the additional expressiveness, assuming the loss landscape has local optima within the sparse attention regime. Our experiments on the LRA benchmark show that this is indeed the case, as our SBM-Transformer converges to an average attention sparsity of 20% to 30% while outperforming the Transformer with full attention. We also show in the experiments that we can easily incorporate additional regularization that further encourages sparse attention masks.

4.3 SBM-Transformer is a Universal Approximator
Leveraging previous work on the theoretical expressiveness of sparse attention [45, 46], we show that SBM-Transformer with a small modification¹ retains the same level of expressibility as full attention. Specifically, we show that the low-rank structure of the underlying SBMs does not degrade the expressive power of the Transformer, and that SBM-Transformer can universally approximate arbitrary functions with O(n) connections. For brevity, we provide a rough overview of the proof and defer further details to Appendix A.

Theorem 1. Let F be the class of continuous sequence-to-sequence functions, and let T^{h,m,r}_{SBM} denote the class of SBM-Transformers with h attention heads, head dimension m, and r dimensions in the hidden layers. Then for any f ∈ F, ϵ > 0, and 1 ≤ p < ∞, there exists a function g ∈ T^{h,m,r}_{SBM} such that

$\int_D \| f(X) - \mathbb{E}[g(X)] \|_p^p \, dX \leq \epsilon \quad (10)$

¹ Here we consider a variant of SBM-Transformer where self-loops are added manually (i.e., M_ii = 1 for all i). While this is useful in the theoretical analysis, we find that omitting self-loops slightly helps empirical performance and hence omit them in the main experiments.
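Before turning to the proof sketch, the straight-through trick of Eqn. (9), together with the edge-exploration perturbation, can be summarized in a short PyTorch snippet. The helper name and the toy loss below are ours; in the actual model the probabilities would come from Q̂, Ŝ, and K̂ restricted to the sampled edges, and sampling would be performed by fastRG rather than an explicit Bernoulli draw.

```python
import torch

def ste_mask(M_hard, P):
    """Straight-through surrogate (cf. Eqn. 9): the forward value equals the
    sampled binary mask M_hard, while gradients w.r.t. the edge probabilities
    P flow only through the sampled edges."""
    return M_hard + M_hard * (P - P.detach())

# toy usage
P = torch.rand(4, 4, requires_grad=True)                        # edge probabilities
P_explore = (P + 0.01).clamp(max=1.0)                           # delta = 0.01 exploration
M_hard = torch.bernoulli(P_explore.detach())                    # discrete sampling step
M = ste_mask(M_hard, P)
loss = ((M * torch.randn(4, 4)).sum()) ** 2
loss.backward()      # gradients reach P only at entries where M_hard == 1
```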
According to the main theorem of Yun et al. (2020) [44], SBM-Transformer achieves universal approximability if 1) each node attends to itself, 2) the aggregation of all attention patterns contains a Hamiltonian path, and 3) there exists a path between all node pairs. While the first condition is trivially true due to our modification, the other two conditions require a careful choice of three SBMs. Here we first parameterize one SBM to hard-assign tokens into k equally-sized clusters, inducing a block-diagonal attention pattern. The other two SBMs are parameterized such that the two graphs together form a star graph with k global relay tokens. Combining the three attention patterns leads to a parameterization of SBM-Transformer that satisfies all three conditions, hence proving the theorem.

5 Experiments
For empirical evaluation, we first use a synthetic task to show that our model is flexible enough to learn towards full attention when needed, in contrast to previous works. We then experiment on Long Range Arena (LRA) [36], a benchmark widely used to assess the capacity of efficient Transformers in learning long-range contexts across different modalities. Lastly, we show results on the GLUE benchmark [39] to assess the performance of SBM-Transformer in a downstream NLP setting. All experiments were run on a remote GCP server equipped with 16 NVIDIA A100 Tensor Core GPUs.

5.1 Synthetic Task: Finding Repeated Tokens
Dataset. We formulate a token-level binary classification task as follows: each input sequence consists of N integers, each of which is uniformly sampled from {1, 2, ..., N}. We use N = 256 in our setup. The prediction target is a sequence of equal length, where each token is labeled 1 if a duplicate of it exists somewhere within the sequence, and 0 otherwise. Below is a simple example with N = 8 that illustrates the task. We measure the performance of models via the binary cross-entropy loss.

Input: 1 4 3 7 3 2 3 1 ⇒ Target: 1 0 1 0 1 0 1 1

Methods. For this task, we compare SBM-Transformer with k = 128 clusters against various efficient Transformers: Linear Transformer [19], Linformer [40], Reformer [20], Performer [9], and Nyströmformer [43]. Across all methods, we use a single-layer, single-head architecture with 32 hidden dimensions. Note that due to this constrained setting, the sole head must perform full attention, comparing each token to all the others, in order to attain 100% accuracy. All models are trained for 2000 epochs, where a new batch of sequences is sampled on-the-fly at each epoch. We use a batch size of 256 and a learning rate of 1e-3.

Results. Figure 4 shows the training loss curves of each baseline method as well as SBM-Transformer. Full attention quickly converges to 100% accuracy, which is expected as it computes all possible pairwise interactions by default. Other models that apply low-rank or kernelized attention fail to achieve the same level of accuracy, due to limited expressibility under the constrained setting. Though SBM-Transformer converges more slowly than full attention, it demonstrates the ability to drive itself towards full attention, eventually attaining zero loss.

5.2 Long Range Arena (LRA)
To demonstrate that the flexible inductive bias of SBM-Transformer is effective for modeling long-range dependencies, we test SBM-Transformer against previous work on the LRA benchmark. We also test how the performance is affected by applying a sparsity-based regularizer.
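As a reference for the synthetic task of Section 5.1, the data can be generated with a short NumPy helper. The function name and signature below are ours, not from the released code.

```python
import numpy as np

def make_duplicate_task(batch_size, N=256, rng=None):
    """Generate one batch of the repeated-token task: tokens are drawn
    uniformly from {1, ..., N}, and a token is labeled 1 iff the same value
    appears elsewhere in the sequence."""
    rng = rng or np.random.default_rng()
    x = rng.integers(1, N + 1, size=(batch_size, N))
    counts = np.stack([np.bincount(row, minlength=N + 1)[row] for row in x])
    y = (counts > 1).astype(np.int64)
    return x, y

x, y = make_duplicate_task(batch_size=4)   # x, y both have shape (4, 256)
```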
Dataset. LRA [36] consists of five testbeds with varying modalities: LISTOPS [26] is a 10-way classification task mapping a sequence of single-digit numbers and four different set operations to its corresponding solution. TEXT [24] is a binary classification task where byte-level IMDB movie reviews must be classified as having positive or negative sentiment. RETRIEVAL [30] is also a character-level binary classification task, where two sequences from ACL Anthology papers are given as input and the model must predict whether there exists a citation link between them. IMAGE [21] is a 10-way classification task mapping flattened pixel sequences from CIFAR-10 to their classes. PATHFINDER [23] provides flattened pixel sequences from an image, and the model must decide whether two circles in the image are connected by a dashed line. For this benchmark, we use the PyTorch implementation of LRA provided by the authors of Nyströmformer [43] and adhere to the same train-test splits. Performance in all five tasks is measured using classification accuracy.

Methods. We compare SBM-Transformer against the same baselines as in the synthetic task above. For fair comparison, we set all Transformer models to use the default setting of [43], which fixes 2 layers, 2 attention heads, and 64 embedding dimensions. For SBM-Transformer, we use k = 128 clusters. The output token representations are mean-pooled to obtain the sequence representation for all tasks. More details on the architecture setups can be found in Appendix C.

Results. Table 1 shows the test accuracies of each method. Our SBM-Transformer achieves the best overall performance, ranking first in two tasks and second in one other. SBM-Transformer also outperforms full attention in all five tasks while computing 30% or less of the attention scores on average, which supports our claim that masked attention with partial attention score computation can be preferable to full attention depending on the task. With respect to the attention mask structure, we find that the flexibility of the SBM is indeed beneficial, as Reformer struggles in LISTOPS, most likely due to the inability of block-diagonal masks to model hierarchical contexts.

Mask Density Regularization. To test whether the model can learn effectively under a constraint on the computational cost, we also test the model with a sparsity-based regularizer that discourages excessive use of query-key edges. We penalize each sampled edge by adding to the predictive loss a weighted regularization term λL_s, where L_s denotes the average mask density across all attention heads. Table 2 shows the performance of SBM-Transformer across varying regularization weights. Under strong regularization, the model surprisingly retains competitive performance while significantly reducing the average mask density. This indicates that similar local optima are shared across regimes with varying attention density in the loss landscape, and the regularization term is able to drive the model towards finding optimal attention scores with smaller density.

Efficiency. Furthermore, we compare computational costs during inference by measuring FLOP counts and peak memory usage. For SBM-Transformer, we test the model trained under λ = 10^{-1}. Due to the lack of support for sparse tensor operations in existing FLOP counters, we measure FLOP counts by manually enumerating each tensor operation. Table 3 shows that SBM-Transformer is comparably efficient across all tasks except for TEXT, where SBM-Transformer showed the largest average mask density.
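For clarity, the mask density regularization described above amounts to adding the average per-head mask density to the task loss; a minimal sketch with assumed variable names follows.

```python
def regularized_loss(pred_loss, mask_densities, lambda_reg=0.1):
    """Sketch of the regularized objective: mask_densities holds, for each
    attention head, the fraction of sampled edges (num_edges / n**2), and
    lambda_reg corresponds to the weight lambda in the text."""
    L_s = sum(mask_densities) / len(mask_densities)  # average density across heads
    return pred_loss + lambda_reg * L_s
```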
Note that while the costs of the other baselines are fixed after initialization, the cost of SBM-Transformer is data-adaptive and can vary input-by-input. Further analysis and qualitative examples demonstrating the input-dependent attention mask densities can be found in Appendix C.

Layerwise Diversity in Sparsity. We also compare the densities of masks sampled at each layer of SBM-Transformer during test time, to examine whether our model is capable of diversifying sparsity across layers for better performance. Recall that this allows models to gather information at different levels, as seen in pretrained BERT, where lower layers focus on the overall content via dense attention while upper layers gather syntactic information with tree-like patterns [10]. For each of the five tasks, we pick the two highest-performing models (one unregularized and one regularized) for measurement. Figure 5 shows the average layer-wise mask densities of unregularized and regularized SBM-Transformers across the different tasks. We find that under no regularization, the two layers can differ by more than 10% in tasks such as LISTOPS and IMAGE. This may be due to the hierarchical and compositional structure of the two tasks. We also find that the variation is relatively low in TEXT, with densities around 25%, indicating that the task requires broad attention overall. Lastly, the standard deviation is extremely large in the upper layers for PATHFINDER, showing that it samples a wide variety of masks depending on the input.

5.3 General Language Understanding Evaluation (GLUE)
To check whether its strong performance on LRA extends to the downstream NLP setting as well, we evaluate SBM-Transformer against baselines on the GLUE benchmark [39].

Dataset. We consider four NLP tasks in GLUE [39]. SST-2 [35] consists of movie reviews for which the model must predict positive or negative sentiment. For QQP [7], the task is to determine, given a pair of questions, whether one question is a paraphrase of the other. MNLI [42] consists of sentence pairs, each with a target label indicating whether the two sentences are connected through entailment, contradiction, or neither. QNLI [31] consists of sentence-question pairs, and the task is to determine whether the sentence contains an answer to the question. Each task is formulated as sequence classification, and we measure performance by F1 score on the respective validation sets.

Methods. Following previous work [43], we use a small variant of BERT [13] with 4 layers, 8 attention heads, and 512 embedding dimensions. We replace full attention with each attention module used in the previous experiments. For SBM-Transformer, we use k = 128 clusters without sparsity regularization (i.e., λ = 0). Here, we find that adding local attention significantly boosts performance, and thus add a sliding window of size 64 to SBM-Transformer. We first pretrain each model under the masked language modeling objective for 50 epochs on a corpus with text from English Wikipedia, BookCorpus [50], and RealNews [47]. We then finetune each pretrained model for 5 epochs on the GLUE training sets. More details on the architecture and training setup can be found in Appendix C.

Results. Table 4 reports the F1 scores of each method on the different NLP tasks. SBM-Transformer performs competitively against full attention overall, and outperforms all baselines in SST-2 and QQP.
We also find that the fine-tuned SBM-Transformer models use attention masks with an average density of 13.5% across all tasks, showing that the model can encode useful information from input sentences effectively under highly sparse attention.

6 Conclusion
We propose SBM-Transformer, an efficient Transformer that can data-adaptively choose its attention sparsity between sparse and full attention, without the need to explicitly compute the full attention score matrix. Theoretically, we show that our model enjoys the same expressibility as the original Transformer due to the flexibility of the latent SBM. Empirical experiments on LRA and GLUE show that our model performs competitively against previous state-of-the-art efficient Transformers. Nonetheless, there are limitations due to sparse tensor operations being less optimized in GPU kernels. In the LRA experiments, we found that SBM-Transformer can result in longer runtimes compared to dense counterparts even though its memory usage is much lower. While previous sparsity-based attention mechanisms with block-sparse attention are much more amenable to GPU computation [46, 8, 3], our work requires an architecture with better workload balancing and acceleration under unstructured sparsity, for which there is ongoing work [41, 49]. We still believe this work is valuable, as it is the first approach to induce per-example attention sparsity, allowing the model to adjust its computational cost based on the input. The cost being dependent on the number of edges also allows practitioners to easily impose constraints based on the available computational resources. We hope to see more GPU-friendly tensor operations optimized for fine-grained sparsity in the future, at which point the value of this work will increase even further. As we propose a foundational replacement for the scaled dot-product attention module in the Transformer architecture, we do not expect any immediate negative societal impact from this work.

Acknowledgments and Disclosure of Funding
We would like to thank Kun Dong for the insightful comments. This work was supported by an Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00926, 2022-0-00959, 2021-0-02068, and 2019-0-00075).
1. What is the main contribution of the paper regarding the Transformer variant? 2. What are the strengths and weaknesses of the proposed approach, particularly in its originality, quality, empirical evaluation, clarity, significance, and limitations? 3. How does the reviewer assess the paper's ability to push the accuracy-cost Pareto frontier versus existing efficient Transformer architectures? 4. Is it possible to replace the nondeterministic prediction during inference with a deterministic operation? 5. How does the reviewer view the paper's significance and practicality for practitioners, considering the current state of hardware support for unstructured sparsity?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes a Transformer variant where the self-attention matrices are masked using random adjacency matrices generated using a stochastic block model (SBM). The goal here is to reduce the computational cost of producing each N x N self-attention matrix. In the forward pass, the proposed method computes SBM parameters from the query and key embeddings using an MLP and samples a self-attention mask using a fast random graph sampling method. In the backward pass, gradients are passed through this discrete sampling step using a straight-through estimator. The authors show that the proposed SBM-Transformer is a universal approximator in the setting considered by Yun et al. (2020) and Zaheer et al. (2020). Empirically, the authors demonstrate that the SBM-Transformer (1) is able to emulate a standard Transformer with dense attention on a synthetic task and (2) outperforms several efficient Transformer variants on average on the Long Range Arena benchmark. Strengths And Weaknesses Originality As noted in the paper, the use of sparse attention masks to reduce the computational cost of the self-attention operation has previously been explored in several existing papers (e.g., axial attention (Ho et al. 2019), BigBird (Zaheer et al., 2020)). The main methodological novelty here is the use of a stochastic block model to generate a random attention mask in each forward pass. To my knowledge, this approach has not been studied in prior work. Quality Theoretical results. The paper establishes a universal approximation property for the proposed transformer variant (Theorem 1). This result is helpful as a sanity check that the proposed method is a reasonable approach. The overall technical contribution here is relatively minor since the proof of the result is primarily an application of existing machinery from Yun et al. (2020) and Zaheer et al. (2020). A minor weakness of the analysis is that it makes use of a modified version of the SBM-Transformer architecture relative to that used in the empirical evaluation -- in particular, the analysis requires that M_ii = 1, unlike in the version used to obtain the results in Table 1. Empirical evaluation. A strength of the paper is the result that the proposed SBM-Transformer architecture outperforms the baseline efficient Transformer architectures on average over the LRA benchmark tasks (Table 1). However, a significant weakness of this comparison is that it does not appropriately account for the accuracy-cost tradeoff corresponding to each of the evaluated methods. For instance, it is not obvious from the paper whether the SBM-Transformer outperforms the given baselines when controlling for a measure of computational cost such as the per-example FLOP count during inference. Such controls are important since the SBM-Transformer introduces additional operators (such as a 2-layer MLP used for computing the SBM parameters), the costs of which will have to be offset by the increased sparsity of the self-attention matrix. Clarity The presentation was generally clear and easy to follow. The specification of the masked attention computation in Section 3.2 is problematic since simply masking the attention logits to 0 is not equivalent to zeroing-out the attention scores after the softmax operator. Significance As noted in Section 6, the unstructured sparsity of the resulting attention maps in the SBM-Transformer formulation is not amenable to fast matrix multiplication on current GPU hardware. 
Thus, as a practical matter, the proposed method requires longer wall-clock times for inference, which consequently limits the significance of the work for practitioners. The authors argue that the method (1) "enables multi-head attention with long sequences", and that (2) practitioners can tune the computational cost of inference by imposing constraints on the number of edges to sample. I don't find these arguments to be convincing as is -- in particular, claim (1) requires additional justification in light of (a) existing efficient transformer architectures that incur subquadratic computation, and (b) techniques such as memory-efficient attention [1] that require subquadratic memory. As for claim (2), the merit of this additional flexibility (e.g., in terms of accuracy vs. cost) should be evaluated empirically. [1] Rabe & Staats, 2021. Self-attention Does Not Need O(n^2) Memory. Questions As discussed in the previous section, does the SBM-Transformer push out the accuracy-cost Pareto frontier vs. existing efficient Transformer architectures? Nondeterministic prediction at inference time is typically considered to be undesirable by practitioners. However, the setup as described requires random sampling under the SBM even during inference. Can this sampling step be replaced with a deterministic operation? Limitations The authors highlight the limitation of their work with regard to hardware support for unstructured sparsity in Section 6.
NIPS
Title Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost

Abstract To overcome the quadratic cost of self-attention, recent works have proposed various sparse attention modules, most of which fall under one of two groups: 1) sparse attention with hand-crafted patterns and 2) full attention followed by a sparse variant of softmax such as α-entmax. Unfortunately, the first group lacks adaptability to data while the second still requires quadratic cost in training. In this work, we propose SBM-Transformer, a model that resolves both problems by endowing each attention head with a mixed-membership Stochastic Block Model (SBM). Then, each attention head data-adaptively samples a bipartite graph, the adjacency of which is used as an attention mask for each input. During backpropagation, a straight-through estimator is used to flow gradients beyond the discrete sampling step and adjust the probabilities of sampled edges based on the predictive loss. The forward and backward costs are thus linear in the number of edges, which each attention head can also choose flexibly based on the input. By assessing the distribution of graphs, we theoretically show that SBM-Transformer is a universal approximator for arbitrary sequence-to-sequence functions in expectation. Empirical evaluations under the LRA and GLUE benchmarks demonstrate that our model outperforms previous efficient variants as well as the original Transformer with full attention. Our implementation can be found at https://github.com/sc782/SBM-Transformer.

1 Introduction
The Transformer [38] architecture has been the go-to method for encoding sequential data, due to its superior performance in various tasks such as machine translation [28], image classification [14], and protein language modeling [32]. Its key strength stems from the multi-head attention module, where a so-called attention score matrix computes how contextually important one token is to another for all possible token pairs. Each Transformer layer simultaneously pools the token representations based on the attention scores, eventually returning contextualized features without sequentially traversing through the input sequence as its recurrent neural network-based predecessors [16] do. A well-known drawback of the original Transformer is its high computational cost in time and memory, which increases quadratically with sequence length. This is due to the full pairwise computation of attention scores, which prohibits applying it in tasks involving long-range dependencies such as document summarization [17] or high-resolution image processing [48]. Many works have thus focused on developing more efficient alternatives by exploiting fixed or learnable attention sparsity patterns [8, 46, 20, 12], low-rank approximations [40, 43], or kernelized attention modules [19, 9]. Even though the efficient alternatives hold theoretical expressibility guarantees [45], they are far from sufficient, still failing to convince practitioners to replace the original Transformer. We believe this is mostly due to their lack of adaptability. They apply the same modifications to uniformly sparsify all the attention modules across layers, without considering the tasks at hand. Such a strategy imposes inductive bias too strongly and often leads to sub-optimal cost vs. performance trade-offs in downstream tasks [27].
In this work, we argue that to retain the utmost potential of Transformers, each attention module should have the ability to flexibly choose between sparse and full attention. This is especially evident when considering many state-of-the-art systems suggest the need for a mixture of dense and sparse attention layers. For example, a qualitative analysis on pretrained BERT showed that lower layers exhibit broad dense attention while upper layers perform focused sparse attention [10]. In the case of GPT-3 [6], the Transformer blocks are manually arranged to alternate between dense and sparse attention. To contribute to the efficient Transformers lineage, we propose SBM-Transformer, capable of adjusting its attention sparsity data-adaptively based without fully computing the attention score matrix (Figure 1). Leveraging a mixed-membership Stochastic Block Model (SBM) [2], each attention head samples a bipartite graph connecting queries to keys. Then, the adjacency of the sampled graph is used as an attention mask so that only attention scores corresponding to sampled edges are computed. The overall computational cost is linear in the number of edges, which can range from linear to quadratic in sequence length depending on the data and task under concern. Each attention head is equipped with its own underlying SBM, enabling the model to diversify the attention sparsity across heads and layers. By incorporating a straight-through estimator [4] in the discrete graph-sampling step, SBM-Transformer enjoys end-to-end differentiability and can find the proper attention sparsity based solely upon minimizing the predictive loss. The model can also easily be further regularized by penalizing the number of sampled edges, which results in a lighter model using less computational resources during inference. To the best of our knowledge, our method is the first Transformer architecture that can data-adaptively choose between linear to full attention with respective computational costs. To summarize, our main contributions are as follows: • We present SBM-Transformer, a novel Transformer of which each attention head can adaptively adjust its attention sparsity as well as computational cost based on the input data. • To demonstrate the benefit of this flexibility, we theoretically prove that SBM-Transformer retains universal approximability, and also stress-test the model under a synthetic task where full attention is required to achieve 100% accuracy. • Evaluations on LRA and GLUE benchmarks show that SBM-Transformer outperforms previous efficient Transformer models as well as the vanilla Transformer with dense attention. 2 Related Work In this section we discuss previous efficient Transformer variants and several works similar to ours with respect to adaptively learning sparse attention patterns. We also review several works on SBMs. Efficient Transformers. Many efficient Transformers tackle to reduce the quadratic cost of multihead attention with different approaches. While we discuss only a handful of representative approaches, a much more comprehensive survey can be found in [37]. The Linear Transformer [19] achieves linear complexity by replacing the softmax with a low-rank kernelized function. Linformer [40] and Nyströmformer [43] use a similar approach by low-rank approximating the attention score matrix. Performer [9] uses positive orthogonal random features to approximate the softmax kernel. 
Reformer [20] gathers similar tokens together through locality-sensitive hashing (LSH) and performs attention amongst tokens within the same bucket. Of all methods above, our method is most similar to Reformer, in the sense that we adaptively assign queries and keys into clusters and form a low-rank sparse attention pattern. However, our method performs soft-clustering with much less structural constraints, allowing each attention head to represent a wider variety of dependency structure and to adjust its sparsity towards full attention if needed. Adaptive Sparsity. With respect to flexible training between sparse and dense attention, there exist some works that parameterize how sparse the attention pattern should be based on the input. The Adaptive Sparse Transformer [11] proposed replacing the usual softmax activation with α-entmax, in which the α parameter can be differentiably trained to adjust the activation between softmax and sparsemax activation [25]. SparseBERT [34] uses a differentiable masking technique where each attention mask is sampled from a Gumbel-sigmoid distribution using data-independent mask probability parameters. While these methods possess the flexibility to adjust between sparse and full attention based on data, they still require full computation of the attention score matrix before sparsification, and hence are unable to leverage the learned sparsity towards better model efficiency. To the best of our knowledge, ours is the first work to be able to adaptively tune its attention sparsity between sparse to full attention without requiring the explicit computation of the attention score matrix, thereby avoiding quadratic cost when possible. Stochastic Block Models. The Stochastic Block Model (SBM) is a generative model that encodes the latent structure of graphs by grouping nodes into clusters. By modeling the cluster-membership of each node as well as inter-cluster relationships, SBMs can represent a wide variety of graph structures, which is a feature especially useful for generating new graphs or predicting missing edges in noisy data [1]. The standard SBM assigns each node to a single cluster, and the probability of an edge between two nodes strictly depends on the corresponding clusters. Several structural extensions include overlapping SBM [22] and mixed-membership SBM [2], which allow each node to be assigned to multiple clusters. The underlying SBM used by our framework mostly resembles these two variants, while the edge probability is modeled by a nonlinear function of two node embeddings rather than a bilinear one. There exist many other extensions including degree-corrected SBM [18] for multi-graphs and hierarchical SBM [29] for multiplex-graphs. Further details can be found in a recent survey [15]. 3 Preliminaries: Sparse Transformers We first introduce the full attention mechanism used in the original Transformer [38] as well as masked attention which will serve as a backbone of our approach. 3.1 Full Attention In vanilla Transformer [38], each attention head takes a sequence of token features as input X ∈ Rn×d where n is the sequence length and d the embedding dimension. Weight parameters WQ,WK ∈ Rd×dh and W V ∈ Rd×dh with head-dimension dh first maps the input features X into query Q, key K, and value V , respectively. Then, the attention score matrix is computed with scaled dot-product of queries and keys followed by row-wise softmax activation σ(·). 
Note that explicit computation of this matrix is the main bottleneck of full attention, incurring O(n2) asymptotic cost in both time and memory. The value features V are then pooled based on the attention scores, returning the output token representations. Altogether, the operation performed by each attention head can be written as Q = XWQ, K = XWK , V = XW V (1) Attn(X) = σ ( QKT√ dh ) V . (2) 3.2 Masked Attention One way to remove the quadratic bottleneck from the attention score matrix is to apply a binary mask M ∈ {0, 1}n×n and compute the scaled dot-products QiKTj / √ dh only if Mij = 1. In presence of an attention mask, the operation is modified to Attnmask(X,M) = σM ( M ⊙ QK T √ dh ) V (3) σM (A)ij := exp(Aij)∑ k∈{k′|Mik′=1} exp(Aik) if Mij = 1 0 otherwise (4) where ⊙ indicates entry-wise multiplication. Note that the masked-softmax σM (·) operator only computes unmasked terms, ensuring that each (i, j)-th attention score survives as nonzero if and only if Mij = 1. This is thus equivalent to filling in the (i, j)-th attention score with −∞ if Mij = 0, then applying the standard softmax operator. Most sparsity-based efficient Transformers fall under this formulation, while using different methods to either manually fix or learn the mask M . For instance, local attention [8, 3, 46] with a sliding window sets Mij = 1 if |i− j| < c for some context window size c while Reformer [20] sets Mij = 1 if Qi and Kj are hashed into the same bucket. 4 Our Method: SBM-Transformer Here we discuss the details of SBM-Transformer (Figure 2). We first illustrate the forward step of our attention module and how the underlying SBM [2] of each head, from which we sample our attention masks, is parameterized by the input tensors. We then discuss how the model enables end-to-end differentiability despite the discrete graph sampling step. 4.1 Forward step with the Stochastic Block Model In our framework, we view the attention mask M as an adjacency matrix of a bipartite graph that connects queries to keys, and let each attention head sample an adjacency matrix that best represents the contextual dependencies amongst input tokens. In order to efficiently sample adjacency matrices while avoiding the quadratic cost, the distribution of graphs must first be parameterized with a sub-quadratic number of latent variables. Stochastic Block Models fit perfectly for our purpose as it models graphs that are low-rank structured with k latent clusters, allowing full parameterization using O(nk) memory. More concretely, the SBM distribution is defined by two nonnegative nodeto-cluster memberships Y ,Z ∈ Rn×k+ and a so-called block matrix B ∈ Rk×k+ that stores the inter-cluster connection probabilities. The probability of node i being connected to node j is computed as p(i, j) = YiBZTj . Equivalently, the expectation of the adjacency matrix sampled from A ∼ SBM(Y ,B,Z) can be written as E[A] = Y BZT . For proper parameterization of the SBM, we must infer the nonnegative node-memberships and block matrix from the queries and keys. To do so, we equip each attention head a 2-layer MLPdh→dh with ReLU activation, and a set of k trainable cluster-embeddings C ∈ Rk×dh . First, our model computes the block matrix Ŝ ∈ Rk×k+ by taking dot products amongst cluster-embeddings C followed by a 2-dimensional softmax activation. The node embeddings are obtained by processing each query and key through the MLPdh→dh , mapping token representations into the node representation space. 
The memberships of query and key nodes, which we denote by Q̂ and K̂, are then inferred by taking dot products of node and cluster embeddings, followed by a sigmoid function. The block matrix Ŝ, query node-memberships Q̂, and key node-memberships K̂ altogether provide a well-defined parameterization for the SBM. Thus, a bipartite graph adjacency M ∈ {0, 1}n×m can be sampled from M ∼ SBM(Q̂, Ŝ, K̂) with expectation E[M ] = Q̂ŜK̂T : the probability of connecting query Qi to key Kj equals p(i, j) = Q̂iŜK̂Tj . Formally, the sampling procedure can be written as Algorithm 1: fastRG(Y ,B,Z)[33] Input :Y ∈ Rn×k+ , B ∈ Rk×k+ , Z ∈ Rn×k+ Output :M ∈ {0, 1}n×n with E[M ] = Y BZT 1 Compute diagonal matrices DY = (diag(1Y ))−1 and DZ = (diag(1Z))−1 2 Column-normalize Y = Y D−1Y and Z = ZD −1 Z 3 Compute B = DY BDZ 4 Sample number of edges m ∼ Poisson(1B1T ) 5 Initialize M = 0 6 for i = 1 : m do 7 Sample (U, V ) from {1, . . . , k} × {1, . . . , k} with Pr(U = u, V = v) ∝ Buv 8 Sample source I from {1, . . . , n} with Pr(I = i) = Y iU . 9 Sample destination J from {1, . . . , n} with Pr(J = j) = ZjV 10 Set MIJ = 1. 11 end Ŝ = softmax(CCT ) (5) Q̂ = sigmoid(MLPdh→dh(Q)C T ) (6) K̂ = sigmoid(MLPdh→dh(K)C T ) (7) M ∼ SBM(Q̂, Ŝ, K̂) (8) For the last sampling step, we incorporate a fast random graph sampling algorithm fastRG (Alg. 1, [33]) that can sample graphs from a SBM in time and memory asymptotically linear in the number of edges. One advantage of fastRG is that each edge can be sampled in parallel, allowing high efficiency with the help of multiprocessing. A more significant feature of the method is that the number of edges, which determines the overall cost, is sampled from a Poisson distribution with input-dependent mean (Line 4). Thus, the model can dynamically adjust its computational cost between linear and quadratic in sequence length based on the data. Figure 3 shows example placements of nodes and clusters on the dh-dimensional space to show how the sparse structure is determined. If all nodes and clusters are gathered closely, then all entries in Q̂ and K̂ become close to 1, resulting in p(i, j) ≈ 1 for all i, j and hence a dense M . If clusters are well-separated but each surrounded by some set of nodes, Ŝ becomes close to diagonal while each row in Q̂ and K̂ is close to a one-hot vector indicating the cluster nearby. Such setting leads to a block diagonal mask similar to LSH bucketing of Reformer [20]. Lastly, if all clusters are far apart from the nodes, both Q̂ and K̂ approximately equal zero, zeroing out all the edge probabilities. 4.2 Backward Step with Straight-Through Estimator The graph sampling procedure is naturally a discrete operation. Thus, naive backpropagation cannot learn the proper parameterization for the SBM that minimizes the predictive loss. To cope with this non-differentiability, we incorporate a Straight-Through Estimator (STE) [4] to pass the gradient beyond the discrete sampling step. The STE enables providing the gradient ∂L/∂Mij to the probability for each sampled edge (i, j) (Eqn. 9). It works as if we had used a continuous mask M ⊙ E[M ] that stores the probability of each sampled edge instead of the binary mask M during forward propagation. This way, the probabilities of sampled edges can be learned end-to-end: the gradients provide information on whether each sampled edge was useful or not for prediction. 
∂L ∂pij := ∂L ∂Mij = ∂L ∂Aij · QiK T j√ dh if Mij = 1 0 otherwise where A := M ⊙ QK T √ dh (9) (b) Block-diagonal(a) Dense (c) Sparse Random Edge Exploration. While this approach enables backpropagation in the same O(m) cost as in the forward step, this comes at the expense of not being able to propagate information through edges that were not sampled. This can be problematic when an edge probability accidentally collapses to zero, after which the edge becomes unlikely to ever be sampled even when it may be useful for the prediction task at hand. Therefore, we add a small perturbation δ > 0 to each edge probability pij , allowing the model to explore new edges and resuscitate their sampling probabilities if necessary. We find that a δ as small as 0.01 significantly helps in practice, and thus use this edge exploration scheme during training for our experiments. Wouldn’t the model always prefer full attention? Note that the gradient ∂L/∂pij can be positive, which suppresses the probability of edge (i, j). At first, it may seem counter-intuitive why the model would ever limit itself to using fewer edges during training without any sparsity-based regularizations. One explanation is that masked attention provides an easy way to reduce attention scores under finite head dimensions. Under full attention, it is known that the representational space of attention score matrices is limited by the head dimension and softmax activation [5]. This limitation inevitably introduces unwanted noise in the attention scores especially when working with long sequences. In SBM-Transformer, however, the structural sparsity in masked attention introduces another dimension that induces a larger space of row-stochastic matrices (full attention is a special case of masked attention where Mij = 1 for all i, j). Therefore, it is reasonable that the model may encourage sparsity to leverage the additional expressiveness assuming the loss landscape has local optima within the sparse attention regime. Our experiments on the LRA benchmark show that this is indeed the case, as our SBM-Transformer converges to an average attention sparsity of 20% to 30% while outperforming Transformer with full attention. We also show in the experiment that we can easily incorporate additional regularization that further encourages sparse attention masks. 4.3 SBM-Transformer is a Universal Approximator Leveraging previous work on the theoretical expressiveness of sparse attention [45, 46], we show that SBM-Transformer with a small modification1 retains the same level of expressibility as full attention. Specifically, we show that the low-rank structure of the underlying SBMs does not degrade the expressive power of Transformer, and that SBM-Transformer can universally approximate arbitrary functions with O(n) connections. For brevity, we provide a rough overview of the proof and defer further details to Appendix A. Theorem 1. Let f ∈ F be class of continuous sequence-to-sequence functions. T h,r,mSBM denote the class of SBM-Transformers with h attention heads, m head dimension, and r dimensions in hidden layers. Then for any ϵ > 0 and 1 ≤ p < ∞, there exists a function g ∈ T h,m,rSBM such that∫ D ∥f(X)− E[g(X)]∥ppdX ≤ ϵ (10) 1Here we consider a variant of SBM-Transformer where self-loops are added manually (i.e. Mii = 1 for all i). While this is useful in theoretical analysis, we find that not having self-loops slightly helps in empirical performance and hence omit self-loops for the main experiments. 
According to the main theorem of Yun et al. (2020) [44], SBM-Transformer achieves universal approximability if 1) each node attends to itself, 2) the aggregation of all attention patterns contains a Hamiltonian path, and 3) there exists a path between all node pairs. While the first condition is trivially true due to our modification, the other two conditions require careful choice of three SBMs. Here we first parameterize one SBM to hard-assign tokens into k equally-sized clusters, inducing a block-diagonal attention pattern. The other two SBMs are parameterized such that the two graphs together form a star graph with k global relay tokens. Combining the three attention patterns lead to a parameterization of SBM-Transformer that satisfies all three conditions, hence proving the theorem. 5 Experiments For empirical evaluations, we first use a synthetic task to show that our model is flexible enough to learn towards full attention when needed in contrast to previous works. We then experiment on Long Range Arena (LRA) [36], a benchmark widely used to assess the capacity of efficient Transformers in learning long-range contexts across different modalities. Lastly, we show results on the GLUE benchmark [39] to assess the performance of SBM-Transformer in a downstream NLP setting. All experiments were run on a remote GCP server equipped with 16 NVIDIA A100 Tensor Core GPUs. 5.1 Synthetic Task: Finding Repeated Tokens Dataset. We formulate a token-level binary classification task as follows: each input sequence consists of N integers, each of which is uniformly sampled from {1, 2, . . . , N}. We use N = 256 in our setup. The prediction target is a sequence of equal length, where each token is labeled 1 if there exists a duplicate somewhere within the sequence, and 0 otherwise. Below is a simple example with N = 8 that illustrates the task. We measure the performance of models via binary cross-entropy loss. Input: 1 4 3 7 3 2 3 1⇒ Target: 1 0 1 0 1 0 1 1 Methods. For this task, we compare SBM-Transformer with k = 128 clusters against various efficient Transformers: Linear Transformer [19], Linformer [40], Reformer [20], Performer [9], and Nyströmformer [43]. Across all methods, we use a single-layer and single-head architecture with 32 hidden dimensions. Note that due to this constrained setting, the sole head must perform full attention to compare each token to all the others in order to attain 100% accuracy. All models are trained for 2000 epochs where a new batch of sequences is sampled on-the-fly at each epoch. We use a batch size of 256 and learning rate of 1e-3. Results. Figure 4 shows the training loss curves of each baseline method as well as SBMTransformer. Full attention quickly converges to 100% accuracy, which is expected as it computes all possible pairwise interactions by default. Other models that apply low-rank or kernelized attention fail to achieve the same level of accuracy, due to limited expressibility under the constrained setting. Though SBM-Transformer converges more slowly compared to full-attention, it demonstrates the ability to drive itself towards full-attention, eventually attaining zero loss. 5.2 Long Range Arena (LRA) To demonstrate that the flexible inductive bias of SBM-Transformer is effective for modeling longrange dependencies, we test SBM-Transformer against previous work on the LRA benchmark. We also test how the performance is affected with respect to applying a sparsity-based regularizer. Dataset. 
LRA [36] consists of five different testbeds with varying modalities: LISTOPS [26] is a 10-way classification task to map a sequence of single-digit numbers and 4 different set operations, to its corresponding solution. TEXT [24] is a binary classification task where byte-level IMDB movie reviews must be classified into one of positive or negative sentiments. RETRIEVAL [30] is also a char-level binary classification task, where two sequences from ACL Anthology papers are given as input, and the model must predict whether there exists a citation link between them. IMAGE [21] is a 10-way classification task mapping flattened pixel-sequences from CIFAR-10 to its class. PATHFINDER [23] provides flattened pixel-sequences from an image and the model must decide whether two circles in the image are connected by a dashed line. For this benchmark, we use the PyTorch implementation of LRA provided by the authors of Nyströmformer [43] and adhere to the same train-test splits. Performance in all five tasks is measured using classification accuracy. Methods. We compare SBM-Transformer against the same baselines as with the synthetic task above. For fair comparison, we set all Transformer models to use the default setting used in [43], which fixes 2 layers, 2 attention heads, and 64 embedding dimensions. For SBM-Transformer, we use k = 128 clusters. The output token representations are mean-pooled to obtain the sequence representation for all tasks. More details on the architecture setups can be found in Appendix C. Results. Table 1 shows the test accuracies of each method. Our SBM-Transformer achieves the best overall performance, ranking first in two tasks, and second in one other. SBM-Transformer also outperforms full attention in all five tasks while computing 30% or less attention scores on average, which supports our claim that masked attention with partial attention score computations can be preferred over full attention depending on the task. With respect to the attention mask structure, we find that flexibility of SBM is indeed beneficial, as Reformer struggles in LISTOPS, most likely due to the inability of block-diagonal masks to model hierarchical contexts. Mask Density Regularization. To test if the model can effectively learn under a constraint on the computational cost, we also test the model under a sparsity-based regularizer that discourages excessive use of query-key edges. We penalize each sampled edge by adding to the predictive loss a weighted regularization term λLs, where Ls denotes the average mask density across all attention heads. Table 2 shows the performance of SBM-Transformer across varying regularization weights. Under strong regularization, the model surprisingly retains competitive performance while significantly reducing the average mask density. This indicates that similar local optima are shared across regimes with varying attention density in the loss landscape, and the regularization term is able to drive the model towards finding optimal attention scores with smaller density. Efficiency. Furthermore, we compare computational costs during inference by measuring FLOP count and peak memory usage. For SBM-Transformer, we test the model trained under λ = 10−1. Due to lack of support for sparse tensor operations in existing FLOP-counters, we measure FLOP counts by manually enumerating through each tensor operation. Table 3 shows that SBM-Transformer is comparably efficient across all tasks except for TEXT, where SBM-Transformer showed the largest average mask density. 
Note that while the cost of other baselines are fixed after initialization, the cost of SBM-Transformer is data-adaptive and can vary input-by-input. Further analysis and qualitative examples demonstrating the input-dependent attention mask densities can be found in Appendix C. Layerwise Diversity in Sparsity. We also compare the densities of masks sampled at each layer of SBM-Transformer during test time to examine whether our model is capable of diversifying sparsity across layers for better performance. Recall that this allows models to gather information in different levels, as seen in pretrained BERT where lower layers focus on the overall content via dense attention while upper layers gather syntactic information with tree-like patterns [10]. For each of the five tasks, we pick two highest-performing models (one for unregularized and another for regularized) for measurement. Figure 5 shows the average layer-wise mask densities of unregularized and regularized SBM-Transformers across different tasks. We find that under no regularization, the two layers can differ by more than 10% in tasks such as LISTOPS and IMAGE. This may be due to the hierarchical and compositional structure of the two tasks. We also find that the variation is relatively low in TEXT with densities around 25%, indicating that the task requires broad attention overall. Lastly, the standard deviation is extremely large in upper layers for PATHFINDER, showing that it samples a wide variety of masks depending on the input. 5.3 General Language Understanding Evaluation (GLUE) To check whether its strong performance demonstrated in LRA extends to the downstream NLP setting as well, we evaluate SBM-Transformer against baselines on the GLUE benchmark [39]. Dataset. We consider four NLP tasks in GLUE [39]. SST-2 [35] consists of movie reviews the model must predict their positive or negative sentiments. For QQP [7], the task is to determine whether one question is a paraphrase of the other given a pair of questions. MNLI [42] consists of sentence pairs, each with a target label indicating whether the two sentences are connected through entailment, contradiction, or neither. QNLI [31] consists of sentence-question pairs and the task is to determine whether the sentence contains an answer to the question. Each task is formulated as sequence classification, and we measure performance by F1 score on the respective validation sets. Methods. Following previous work [43], we arrange a small variant of BERT [13] with 4 layers, 8 attention heads, and 512 embedding dimensions. We replace full attention with each attention module used in previous experiments. For SBM-Transformer, we use k = 128 clusters without sparsity regularization (i.e. λ = 0). Here, we find that adding local attention significantly boosts performance, and thus fix a sliding window of size 64 to SBM-Transformer. We first pretrain each model under the masked language modeling objective for 50 epochs on a corpus with text from English Wikipedia, BookCorpus [50], and RealNews [47]. We then finetune each pretrained model for 5 epochs on the GLUE training sets. More details on the architecture and training setup can be found in Appendix C. Results. Table 4 reports the F1 scores of each method on different NLP tasks. SBMTransformer performs competitively against full attention overall, and outperforms all baselines in SST-2 and QQP. 
We also find that the fine-tuned SBM-Transformer models use attention masks that are only 13.5% dense on average across all tasks, showing that the model can encode useful information from input sentences effectively under highly sparse attention.

6 Conclusion

We propose SBM-Transformer, an efficient Transformer that can data-adaptively choose its attention sparsity, ranging from sparse to full attention, without the need to explicitly compute the full attention score matrix. Theoretically, we show that our model enjoys the same expressibility as the original Transformer due to the flexibility of the latent SBM. Empirical experiments on LRA and GLUE show that our model performs competitively against previous state-of-the-art efficient Transformers. Nonetheless, there are limitations because sparse tensor operations are less optimized in current GPU kernels. In the LRA experiments, we found that SBM-Transformer can result in longer runtimes compared to dense counterparts even though its memory usage is much lower. While previous sparsity-based attention mechanisms with block-sparse attention are much more amenable to GPU computation [46, 8, 3], our method calls for better workload balancing and acceleration under unstructured sparsity, for which there is ongoing work [41, 49]. We still believe this work is valuable as it is the first approach to induce per-example attention sparsity, allowing the model to adjust its computational cost based on the input. The cost being dependent on the number of edges also allows practitioners to easily impose constraints based on the available computational resources. We hope to see more GPU-friendly tensor operations optimized for fine-grained sparsity in the future, at which point the value of this work will increase even further. As we propose a foundational replacement for the scaled dot-product attention module in the Transformer architecture, we do not expect any immediate negative societal impact due to this work.

Acknowledgments and Disclosure of Funding

We would like to thank Kun Dong for the insightful comments. This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00926, 2022-0-00959, 2021-0-02068, and 2019-0-00075).
1. What is the focus and contribution of the paper on SBM-Transformer?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of computational complexity and practical impact?
3. How does the reviewer assess the accessibility and readability of the paper, especially for readers not familiar with SBM and generative modeling?
4. What are some minor comments and suggestions provided by the reviewer regarding the content and presentation of the paper?
5. What are the limitations of the method, including its applicability to large GPU clusters and potential numerical instability?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The work under review proposes SBM-Transformer, a novel variant of the Transformer architecture that adjusts the sparsity of its attention blocks dynamically, based on the input sequence data, so that the forward and backward evaluation cost is linear in the number of edges. The edges are chosen adaptively by deploying a generative model for community detection, the stochastic block model (SBM), combined with a fast sampling method, fastRG (Alg. 1, developed in [30]). Such an architecture is flexible, capable of using a mixture of attention sparsities across the layers, yet still able to learn a full-attention model if required. On average, however, it reduces computational complexity significantly, as documented in the well-chosen experimental section on the benchmark Long Range Arena (LRA) dataset. The paper builds on previous theoretical expressivity results for sparse attention transformers and applies these results to the proposed architecture in two lemmas, resulting in Theorem 1 with proofs in the Appendix. The paper concludes with the limitations of the method. Unfortunately, GPU architectures used to train large transformers are not efficient for sparse tensor operations. The proposed SBM-Transformer thus "...often results in longer runtimes compared to dense counterparts even though the memory usage is lower...".

Strengths And Weaknesses
Overall, this is an interesting application of sampling methods (the algorithm fastRG [30]) combined with gradient descent optimization, with theoretical efficiency guarantees supported by experiments and without sacrificing performance. To the reviewer's knowledge, this is a novel approach that I believe is of high interest to the ML community. Short term, the practical impact/significance of the work is limited due to NOT being well suited for large GPU clusters - nowadays the platform of choice for large transformer models. As also mentioned in the nicely written Related Work, there are several methods in place, e.g., Reformer [18], that use hashing similarity instead of an SBM; these are similar in idea, albeit more constrained. The paper could also be improved in terms of accessibility and readability, especially for a reader not familiar with SBMs and generative modeling. For instance, it could help to elaborate in detail how the stochastic block model version from paper [2] (I believe that is the one used) translates to the proposed SBM-Transformer, especially in Section 4.1. Theorem 1: the idea of the proof presented in the main body of the paper is still technical and does not spark the intuition of why a "star graph" structure ensures expressibility, even after the idea of the theorem from [38] is rephrased on line 213. Moreover, the proof in the Appendix relies on previous works, mainly by Yun et al. (2020) [14], and Theorem 1 is proven by showing that SBM-Transformer meets the necessary assumptions of Yun et al. (2020) [14]. The proof of Lemma 2 shows expressibility for p=3 sparsity patterns (expressibility is not guaranteed for p=2, as noted in line 215) and thus proves it for higher p. However, it rather seems like a direct application of the previously mentioned works to this special case of SBM-Transformer. Could you elaborate on what the novelty is here? Otherwise, I suggest being specific and presenting it as an application of Theorem XY from [] and [] ... Line 285 and Table 2: given the experimental std/errors (in brackets?)
presented in the table, I am not sure we can conclude that "… applying a small sparsity regularizer helps in boosting overall sparsity as well as performance …". I would claim the effect is rather insignificant, looking at the average results in the last column! I agree, however, that the computation-saving effect of regularization vs. no regularization in Figure 5 still holds. So I suggest considering rephrasing, or perhaps omitting, the claim above.

Minor comments: typo in Table 1 ("the the").

Questions
To help the accessibility of the article (see above), I suggest rewriting/extending lines 145 - 164. For instance, why a bipartite structure? Or what is the 2-layer MLP d_h to d_h, and why two layers (most likely to retain dimensionality, but could you put down some motivating lines of thought)? Are there any numerical or other limitations incurred by the use of the SBM? In particular, this generative model assumes graph edges follow a (mean-parameterized) Poisson distribution. Besides the one mentioned (line 182) regarding a degeneration of the mask matrix M, I'd encourage the authors to elaborate more on this topic. For instance, is there any limitation in terms of learning "outliers", that is, small but highly connected clusters/tokens? (The SBM is known to work well for balanced cluster sizes, but may struggle for imbalanced ones.) Please find more suggestions in the previous block of comments.

Limitations
A large part of the conclusion is dedicated to a major limitation - despite using less memory, runtimes on GPU clusters are often longer for SBM-Transformer than for dense models. In my opinion, this is unfortunate concerning the practical impact of this work, because the majority of large transformer models (GPT-3, PaLM, OPT-175B, etc.) have been trained in more and more accessible High-Performance Computing (HPC) centers, i.e. supercomputers with massive GPU partitions. Secondly, all results hold "in expectation" (in a weak sense). Could the authors elaborate more on the numerical stability/deterioration of the algorithm?
NIPS
Title Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing

Abstract We develop two methods for the following fundamental statistical task: given an ε-corrupted set of n samples from a d-dimensional sub-Gaussian distribution, return an approximate top eigenvector of the covariance matrix. Our first robust PCA algorithm runs in polynomial time, returns a 1 − O(ε log ε⁻¹)-approximate top eigenvector, and is based on a simple iterative filtering approach. Our second, which attains a slightly worse approximation factor, runs in nearly-linear time and sample complexity under a mild spectral gap assumption. These are the first polynomial-time algorithms yielding non-trivial information about the covariance of a corrupted sub-Gaussian distribution without requiring additional algebraic structure of moments. As a key technical tool, we develop the first width-independent solvers for Schatten-p norm packing semidefinite programs, giving a (1 + ε)-approximate solution in O(p log(nd/ε) ε⁻¹) input-sparsity time iterations (where n, d are problem dimensions).

1 Introduction

We study two natural, but seemingly unrelated, problems in high-dimensional robust statistics and continuous optimization respectively. As we will see, these problems have an intimate connection.

Problem 1: Robust sub-Gaussian principal component analysis. We consider the following statistical task, which we call robust sub-Gaussian principal component analysis (PCA). Given samples X₁, . . . , Xₙ from a sub-Gaussian distribution D with covariance Σ (see Section 2 for a formal definition), an ε fraction of which are arbitrarily corrupted, the task asks to output a unit vector u with u⊤Σu ≥ (1 − γ)‖Σ‖∞ for tolerance γ; throughout we use ‖M‖p to denote the Schatten p-norm (cf. Section 2 for more details). Ergo, the goal is to robustly return a (1 − γ)-approximate top eigenvector of the covariance of sub-Gaussian D. This is the natural extension of PCA to the robust statistics setting.

There has been a flurry of recent work on efficient algorithms for robust statistical tasks, e.g. covariance estimation and PCA. From an information-theoretic perspective, sub-Gaussian concentration suffices for robust covariance estimation. Nonetheless, to date all polynomial-time algorithms achieving nontrivial guarantees on covariance estimation (including PCA specifically) in the presence of adversarial noise require additional algebraic structure. For instance, sum-of-squares certifiably bounded moments have been leveraged in polynomial time covariance estimation algorithms [HL18, KSS18]; however, this is a stronger assumption than sub-Gaussianity. In many applications (see discussion in [DKK+17]), the end goal of covariance estimation is PCA. Thus, a natural question which relaxes robust covariance estimation is: can we robustly estimate the top eigenvector of the covariance Σ, assuming only sub-Gaussian concentration? Our work answers this question affirmatively via two incomparable algorithms. The first achieves γ = O(ε log ε⁻¹) in polynomial time; the second achieves γ = O(√(ε log ε⁻¹ log d)), in nearly-linear time under a mild gap assumption on Σ. Moreover, both methods have nearly-optimal sample complexity.

Problem 2: Width-independent Schatten packing. We consider a natural generalization of packing semidefinite programs (SDPs) which we call Schatten packing.
Given symmetric positive semidefinite A1, . . . ,An and parameter p ≥ 1, a Schatten packing SDP asks to solve the optimization problem min ∥∥∥∥∥∥ ∑ i∈[n] wiAi ∥∥∥∥∥∥ p subject to w ∈ ∆n. (1) Here, ‖M‖p is the Schatten-p norm of matrix M and ∆n is the probability simplex (see Section 2). When p = ∞, (1) is the well-studied (standard) packing SDP objective [JY11, ALO16, PTZ16], which asks to find the most spectrally bounded convex combination of packing matrices. For smaller p, the objective encourages combinations more (spectrally) uniformly distributed over directions. The specialization of (1) to diagonal matrices is a smooth generalization of packing linear programs, previously studied in the context of fair resource allocation [MSZ16, DFO18]. For the `∞ case of (1), packing SDPs have the desirable property of admitting “width-independent” approximation algorithms via exploiting positivity structure. Specifically, width-independent solvers obtain multiplicative approximations with runtimes independent or logarithmically dependent on size parameters of the problem. This is a strengthening of additive notions of approximation typically used for approximate semidefinite programming. Our work gives the first width-independent solver for Schatten packing. 1.1 Previous work Learning with adversarial outliers. The study of estimators robust to a small fraction of adversarial outliers dates back to foundational work, e.g. [Hub64, Tuk75]. Following more recent work [LRV16, DKK+19], there has been significant interest in efficient, robust algorithms for statistical tasks in high-dimensional settings. We focus on methods robustly estimating covariance properties here, and defer a thorough discussion of the (extensive) robust statistics literature to [Ste18, Li18, DK19]. There has been quite a bit of work in understanding and giving guarantees for robust covariance estimation where the uncorrupted distribution is exactly Gaussian [DKK+17, DKK+18, DKK+19, CDGW19]. These algorithms strongly use relationships between higher-order moments of Gaussian distributions via Isserlis’ theorem. Departing from the Gaussian setting, work of [LRV16] showed that if the distribution is an affine transformation of a 4-wise independent distribution, robust covariance estimation is possible. This was extended by [KSS18], which also assumed nontrivial structure in the moments of the distribution, namely that sub-Gaussianity was certifiable via the sum-of-squares proof system. To the best of our knowledge it has remained open to give nontrivial guarantees for robust estimation of any covariance properties under minimal assumptions, i.e. sub-Gaussian concentration. All aforementioned algorithms also yield guarantees for robust PCA, by applying a top eigenvector method to the learned covariance. However, performing robust PCA via the intermediate covariance estimation step is lossy, both statistically and computationally. From a statistical perspective, Ω(d2) samples are necessary to learn the covariance of a d-dimensional Gaussian in Frobenius norm (and for known efficient algorithms for spectral norm error [DKS17]); in contrast, O(d) samples suffice for (non-robust) PCA. Computationally, even when the underlying distrubution is exactly Gaussian, the best-known covariance estimation algorithms run in time Ω(d3.25); algorithms working in more general settings based on the sum-of-squares approach require much more time. In contrast, the power method for PCA in a d× d matrix takes time Õ(d2)3. 
Motivated by this, our work initiates the direct study of robust PCA, which is often independently interesting in applications. We remark there is another problem termed “robust PCA” in the literature, e.g. [CLMW11], under a different generative model. We defer a detailed discussion to [DKK+17], which experimentally shows that algorithms from that line of work do not transfer well to our corruption model. Width-independent iterative methods. Semidefinite programming (SDP) and its linear programming specialization are fundamental computational tasks, with myriad applications in learning, operations research, and computer science. Though general-purpose polynomial time algorithms exist for 3We say g = Õ(f) if g = O(f logc f) for some constant c > 0. SDPs ([NN94]), in practical settings in high dimensions, approximations depending linearly on input size and polynomially on error are sometimes desirable. To this end, approximation algorithms based on entropic mirror descent have been intensely studied [WK06, AK16, GHM15, AL17, CDST19], obtaining additive approximations to the objective with runtimes depending polynomially on ρ/ , where ρ is the “width”, the largest spectral norm of a constraint. For structured SDPs, stronger guarantees can be obtained in terms of width. Specifically, several algorithms developed for packing SDPs ((1) with p =∞) yield (1+ )-multiplicative approximations to the objective, with logarithmic dependence on width [JY11, PTZ16, ALO16, JLL+20]. As ρ upper bounds objective value in this setting, in the worst case runtimes of width-dependent solvers yielding ρ-additive approximations have similar dependences as width-independent counterparts. Widthindependent solvers simultaneously yield stronger multiplicative bounds at all scales of objective value, making them desirable in suitable applications. In particular, `∞ packing SDPs have found great utility in robust statistics algorithm design [CG18, CDG19, CDGW19, DL19]. Beyond `∞ packing, width-independent guarantees in the SDP literature are few and far between; to our knowledge, other than the covering and mixed solvers of [JLL+20], ours is the first such guarantee for a broader family of objectives4. Our method complements analogous `p extensions in the width-dependent setting, e.g. [ALO15], as well as width-independent solvers for `p packing linear programs [MSZ16, DFO18]. We highlight the fair packing solvers of [MSZ16, DFO18], motivated by problems in equitable resource allocation, which further solved `p packing variants for p 6∈ [1,∞). We find analogous problems in semidefinite settings interesting, and defer to future work. Concurrent work. Concurrent work by Kong et al. [KSKO20] also develops a PCA algorithm tolerant to a bounded fraction of adversarial corruption. Their method is similar to our algorithm based on soft downweighting (Algorithm 6), is analyzed under a fourth moment bound assumption (as opposed to sub-Gaussianity as in this paper), and also generalizes to top-k eigenvector estimation. To our knowledge, our fast algorithm (Algorithm 4) is the first in the literature which robustly solves the 1-PCA problem in near-linear time (for gapped covariances), at the cost of weaker error bounds. 1.2 Our results Robust sub-Gaussian principal component analysis. We give two algorithms for robust subGaussian PCA5. Both are sample optimal, polynomial-time, and assume only sub-Gaussianity. The first is via a simple filtering approach, as summarized in the following (and developed in Section 3). Theorem 1. 
Under Assumption 1, let δ ∈ [0, 1], and n = Ω ( d+log δ−1 ( log −1)2 ) . Algorithm 6 runs in time O(nd 2 log n δ log n δ ), and outputs u with u >Σu > (1− C? log −1)‖Σ‖∞, for C? a fixed multiple of parameter c in Assumption 1, with probability at least 1− δ. Our second algorithm is more efficient under mild conditions, but yields a worse approximation 1− γ for γ = O( √ log −1 log d). Specifically, if there are few eigenvalues of Σ larger than 1− γ, our algorithm runs in nearly-linear time. Note that if there are many eigenvalues above this threshold, then the PCA problem itself is not very well-posed; our algorithm is very efficient in the interesting setting where the approximate top eigenvector is identifiable. We state our main algorithmic guarantee here, and defer details to Section 5. Theorem 2. Under Assumption 1, let δ ∈ [0, 1], n = Ω ( d+log δ−1 ( log −1)2 ) , γ = C √ log −1 log d, for C a fixed multiple of parameter c from Assumption 1, and let t ∈ [d] satisfy Σt+1 < (1− γ) ‖Σ‖∞. Algorithm 4 outputs a unit vector u ∈ Rd with u>Σu ≥ (1− γ)‖Σ‖∞ in time Õ( nd 4.5 + ndt 1.5 ). Since Ω(d −2) samples are necessary for a (1− )-approximation to the top eigenvector of Σ via uncorrupted samples, our first method is sample-optimal, as is our second up to a Õ( −1) factor. Width-independent Schatten packing. Our second method crucially requires an efficient solver for Schatten packing SDPs. We demonstrate that Schatten packing, i.e. (1) for arbitrary p, admits width-independent solvers. We state an informal guarantee, and defer details to Section 4. 4In concurrent and independent work, [CMY20] develops width-independent solvers for Ky-Fan packing objectives, a different notion of generalization than the Schatten packing objectives we consider. 5We follow the distribution and corruption model described in Assumption 1. Theorem 3. Let {Ai}i∈[n] ∈ Sd≥0, and > 0. There is an algorithm taking O( p log(nd ) ) iterations, returning a 1 + multiplicative approximation to the problem (1). For odd p, each iteration can be implemented in time nearly-linear in the number of nonzeros amongst all {Ai}i∈[n]. 2 Preliminaries General notation. [n] denotes the set 1 ≤ i ≤ n. The operation ◦ applied to two vectors of equal dimension is their entrywise product. Applied to a vector, ‖·‖p is the `p norm; applied to a symmetric matrix, ‖·‖p is the Schatten-p norm, i.e. the `p norm of the spectrum. The dual norm of `p is `q for q = pp−1 ; when p = ∞, q = 1. ∆ n is the n-dimensional simplex (subset of positive orthant with `1-norm 1) and we define Snε ⊆ ∆n to be the truncated simplex: Snε := { w ∈ Rn≥0 ∣∣∣∣∣ ‖w‖1 = 1, w ≤ 1n(1− ε) entrywise } . (2) Matrices. Sd is d × d symmetric matrices, and Sd≥0 is the positive semidefinite subset. I is the identity of appropriate dimension. λmax, λmin, and Tr are the largest and smallest eigenvalues and trace of a symmetric matrix. For M,N ∈ Sd, 〈M,N〉 := Tr (MN) and we use the Loewner order , (M N iff N−M ∈ Sd≥0). The seminorm of M 0 is ‖v‖M := √ v>Mv. Fact 1. For A, B with compatible dimension, Tr(AB) = Tr(BA). For M,N ∈ Sd≥0, 〈M,N〉 ≥ 0. Fact 2. We have the following characterization of the Schatten-p norm: for M ∈ Sd, and q = pp−1 , ‖M‖p = sup N∈Sd, ‖N‖q=1 〈N,M〉 . For M = ∑ j∈[d] λiviv > i , the satisfying N is ∑ j∈[d]±λ p−1 i viv > i ‖M‖p−1p , so NM has spectrum |λ| p ‖M‖p−1p . Distributions. We denote drawing vector X from distribution D by X ∼ D, and the covariance Σ of D is EX∼D [ XX> ] . 
We say a scalar distribution D is γ²-sub-Gaussian if E_{X∼D}[X] = 0 and

E_{X∼D}[exp(tX)] ≤ exp(t²γ²/2) for all t ∈ R.

Multivariate D has sub-Gaussian proxy Γ if its restriction to any unit vector v is ‖v‖²_Γ-sub-Gaussian, i.e.

E_{X∼D}[exp(t X⊤v)] ≤ exp(t²‖v‖²_Γ/2) for all ‖v‖₂ = 1, t ∈ R. (3)

We consider the following standard model for gross corruption with respect to a distribution D.

Assumption 1 (Corruption model, see [DKK+19]). Let D be a mean-zero distribution on Rᵈ with covariance Σ and sub-Gaussian proxy Γ ⪯ cΣ for a constant c. Denote by index set G′ with |G′| = n a set of (uncorrupted) samples {Xᵢ}_{i∈G′} ∼ D. An adversary arbitrarily replaces εn points in G′; we denote the new index set by [n] = B ∪ G, where B is the (unknown) set of points added by an adversary, and G ⊆ G′ is the set of points from G′ that were not changed.

As we only estimate covariance properties, the assumption that D is mean-zero only loses constants in problem parameters, by pairing samples and subtracting them (cf. [DKK+19], Section 4.5.1).

3 Robust sub-Gaussian PCA via filtering

In this section, we sketch the proof of Theorem 1, which gives guarantees on our filtering algorithm for robust sub-Gaussian PCA. This algorithm obtains stronger statistical guarantees than Theorem 2, at the cost of super-linear runtime; the algorithm is given as Algorithm 6. Our analysis stems largely from concentration facts about sub-Gaussian distributions, as well as the following (folklore) fact regarding estimation of variance along any particular direction.

Lemma 1. Under Assumption 1, let δ ∈ [0, 1], n = Ω(log δ⁻¹ / (ε log ε⁻¹)²), and u ∈ Rᵈ be a fixed unit vector. Algorithm 5, 1DRobustVariance, takes input {Xᵢ}_{i∈[n]}, u, and ε, and outputs σ²_u with |u⊤Σu − σ²_u| < C u⊤Σu · ε log ε⁻¹ with probability at least 1 − δ, and runs in time O(nd + n log n), for C a fixed multiple of the parameter c in Assumption 1.

In other words, we show that using corrupted samples, we can efficiently estimate a 1 + O(ε log ε⁻¹)-multiplicative approximation of the variance of D in any unit direction (Corollary 4 gives a slightly stronger guarantee that reusing samples does not break dependencies of u). This proof is deferred to Appendix B for completeness. Algorithm 6 combines this key insight with a soft filtering approach which has found many applications in the recent robust statistics literature, suggested by the following known structural fact found in previous work (e.g. Lemma A.1 of [DHL19], see also [SCV17, Ste18]).

Lemma 2. Let {aᵢ}_{i∈[m]}, {wᵢ}_{i∈[m]} be sets of nonnegative reals, and a_max = max_{i∈[m]} aᵢ. Define w′ᵢ = (1 − aᵢ/a_max) wᵢ for all i ∈ [m]. Consider any disjoint partition I_B, I_G of [m] with Σ_{i∈I_B} wᵢaᵢ > Σ_{i∈I_G} wᵢaᵢ. Then,

Σ_{i∈I_B} (wᵢ − w′ᵢ) > (1 / (2 a_max)) Σ_{i∈[m]} wᵢaᵢ > Σ_{i∈I_G} (wᵢ − w′ᵢ).

Our Algorithm 6, PCAFilter, takes as input a set of corrupted samples {Xᵢ}_{i∈[n]} following Assumption 1 and the corruption parameter ε. At a high level, it initializes a uniform weight vector w⁽⁰⁾, and iteratively operates as follows (we denote by M(w) the empirical covariance Σ_{i∈[n]} wᵢXᵢXᵢ⊤).

1. uₜ ← approximate top eigenvector of M(w⁽ᵗ⁻¹⁾) via power iteration.
2. Compute σ²ₜ ← 1DRobustVariance({Xᵢ}_{i∈[n]}, uₜ, ε).
3. If σ²ₜ > (1 − O(ε log ε⁻¹)) · uₜ⊤M(w⁽ᵗ⁻¹⁾)uₜ, then terminate and return uₜ.
4. Else:
   (a) Sort indices i ∈ [n] by aᵢ ← ⟨uₜ, Xᵢ⟩², with a₁ smallest.
   (b) Let ℓ ≤ i ≤ n be the smallest set for which Σᵢ₌ℓⁿ wᵢ ≥ 2ε, and apply the downweighting procedure of Lemma 2 to this subset of indices.

A toy end-to-end sketch of this filtering loop on simulated data follows.
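In the sketch, the corrupted data set is a toy instance of Assumption 1 (Gaussian samples with an ε fraction replaced by points planted along a low-variance direction). The median-based estimate stands in for 1DRobustVariance (it is not the paper's Algorithm 5; its calibration assumes Gaussian data), the exact eigendecomposition stands in for power iteration, and the constant in the termination test is illustrative; all function names are ours.

```python
import numpy as np

def robust_variance_1d(X, u):
    # Stand-in for 1DRobustVariance: median of squared projections, rescaled by
    # the median of a chi-square with 1 degree of freedom (~0.455).  This is NOT
    # the paper's estimator; it is just a simple robust proxy for a toy demo.
    return float(np.median((X @ u) ** 2) / 0.455)

def pca_filter(X, eps, c=1.0, max_iters=50):
    # Simplified PCAFilter: soft-downweight the heaviest 2*eps weight tail until
    # the robust variance along the current top eigenvector certifies it.
    n, d = X.shape
    w = np.ones(n) / n
    threshold = 1.0 - c * eps * np.log(1.0 / eps)         # illustrative constant
    u = None
    for _ in range(max_iters):
        M = (X * w[:, None]).T @ X                        # M(w) = sum_i w_i X_i X_i^T
        u = np.linalg.eigh(M)[1][:, -1]                   # top eigenvector (exact, for brevity)
        if robust_variance_1d(X, u) > threshold * (u @ M @ u):
            return u                                      # robust estimate agrees: terminate
        a = (X @ u) ** 2                                  # scores a_i = <u_t, X_i>^2
        order = np.argsort(a)
        suffix_w = np.cumsum(w[order][::-1])[::-1]        # weight of each ascending-order suffix
        ell = np.max(np.nonzero(suffix_w >= 2 * eps)[0])  # smallest suffix with weight >= 2*eps
        tail = order[ell:]
        w[tail] *= 1.0 - a[tail] / a[tail].max()          # soft downweighting of Lemma 2
    return u

# Toy instance of Assumption 1: Gaussian samples with an eps fraction replaced.
rng = np.random.default_rng(0)
n, d, eps = 5000, 10, 0.05
Sigma = np.diag(np.linspace(1.0, 0.1, d))                 # true top eigenvector is e_1
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
X[: int(eps * n)] = 10.0 * np.eye(d)[-1]                  # adversary plants points along e_d
naive = np.linalg.eigh(X.T @ X / n)[1][:, -1]
robust = pca_filter(X, eps)
print("|<naive,  e_1>| =", abs(naive[0]))
print("|<robust, e_1>| =", abs(robust[0]))
```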
The analysis of Algorithm 6 then proceeds in two stages.

Monotonicity of downweighting. We show that the invariant criterion of Lemma 2 (namely, that for the set ℓ ≤ i ≤ n, in every iteration there is more spectral mass on bad points than on good ones) holds inductively for our algorithm. Specifically, lack of termination implies M(w⁽ᵗ⁻¹⁾) puts significant mass on bad directions, which combined with concentration of good directions yields the invariant. The details of this argument can be found as Lemma 11.

Roughly uniform weightings imply approximation quality. As Lemma 2 then applies, the procedure always removes more mass from bad points than from good ones, and thus can only remove at most 2ε mass in total under the corruption model. Thus, the weights w⁽ᵗ⁾ are always roughly uniform (they lie in Sⁿ_{O(ε)}), which by standard concentration facts (see Appendix A) implies that the quality of the approximate top eigenvector is good. Moreover, the iteration count is bounded by roughly d because whenever the algorithm does not terminate, enough mass is removed from large spectral directions. Combining with the termination criterion implies that when a vector is returned, it is a close approximation to the top direction of Σ. Details can be found as Lemma 13 and in the proof of Theorem 1.

4 Schatten packing

For our second robust PCA algorithm, developed in Section 5, we require a key technical tool which we now develop in this section. The tool, Schatten-norm packing semidefinite programs (and hybrid-norm extensions), is a smoothed generalization of the classical packing semidefinite program, which may be of independent interest in other applications. At a high level, the reason Schatten packing solvers are useful for the robust PCA problem is that while an adversary can fool a PCA algorithm based on operator-norm semidefinite programs by "promoting" a single other eigenvector to have a larger variance, a p-norm-based semidefinite program forces a tradeoff between the number of directions promoted and the amount of variance introduced.

4.1 Mirror descent interpretation of [MRWZ16]

We begin by reinterpreting the [MRWZ16] solver, which achieves the state-of-the-art parallel runtime for packing LPs. An ℓ∞ packing LP algorithm solves the following decision problem.

Problem 1 (ℓ∞ packing linear program). Given entrywise nonnegative A ∈ R^{d×n}_{≥0}, either find a primal solution x ∈ ∆ⁿ with ‖Ax‖∞ ≤ 1 + ε or a dual solution y ∈ ∆ᵈ with A⊤y ≥ (1 − ε)1.

Algorithm 1 PackingLP(A, ε)
1: Input: A ∈ R^{d×n}_{≥0}, ε ∈ [0, 1/2]
2: K ← 3 log(d)/ε, η ← K⁻¹, T ← 4 log(d) log(nd/ε)/ε²
3: [w₀]ᵢ ← ε/(n²d) for all i ∈ [n], z ← 0, t ← 0
4: while Awₜ ≤ K·1 and ‖wₜ‖₁ ≤ K do
5:   vₜ ← exp(Awₜ)/‖exp(Awₜ)‖₁
6:   gₜ ← max(0, 1 − A⊤vₜ) entrywise
7:   wₜ₊₁ ← wₜ ◦ (1 + ηgₜ), z ← z + vₜ, t ← t + 1
8:   if t ≥ T then
9:     return y ← (1/T) z
10:  end if
11: end while
12: return x ← wₜ/‖wₜ‖₁

The following result is shown in [MRWZ16].

Proposition 1. PackingLP (Algorithm 1) solves Problem 1 in O(nnz(A) · log(d) log(nd/ε)/ε²) time.

Our interpretation of the analysis of [MRWZ16] combines two ingredients: a potential argument and mirror descent (alternatively known as the "multiplicative weights" framework), which yields a dual feasible point if ‖wₜ‖₁ did not grow sufficiently.

Potential argument. The potential used by [MRWZ16] is log(Σ_{j∈[d]} exp([Awₜ]ⱼ)) − ‖wₜ‖₁, well-known to be an O(log d)-additive approximation of ‖Awₜ‖∞ − ‖wₜ‖₁. As soon as ‖Awₜ‖∞ or ‖wₜ‖₁ reaches the scale O(log(d)/ε), by nonnegativity this becomes a multiplicative guarantee, motivating the setting of the threshold K. (Algorithm 1 is transcribed into NumPy below.)
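The transcription follows the pseudocode step by step. Because the extracted listing lost its ε symbols, the constants K, η, T and the initialization of w₀ reflect our reading of it and of Proposition 1; this is a sketch to be checked against the paper, not an authoritative implementation.

```python
import numpy as np

def packing_lp(A, eps):
    # Sketch of PackingLP (Algorithm 1) for the l_infinity packing decision problem.
    # Returns ("primal", x) with x on the simplex, or ("dual", y) with y on the simplex.
    d, n = A.shape
    K = 3.0 * np.log(d) / eps                          # potential threshold
    eta = 1.0 / K                                      # step size K^{-1}
    T = int(np.ceil(4 * np.log(d) * np.log(n * d / eps) / eps ** 2))
    w = np.full(n, eps / (n ** 2 * d))                 # small uniform start (our reading of line 3)
    z = np.zeros(d)
    for _ in range(T):
        Aw = A @ w
        if Aw.max() > K or w.sum() > K:                # primal scale reached
            return "primal", w / w.sum()
        v = np.exp(Aw - Aw.max())                      # entropic potential gradient (softmax)
        v /= v.sum()
        g = np.maximum(0.0, 1.0 - A.T @ v)             # truncated gradient, entrywise
        w = w * (1.0 + eta * g)                        # multiplicative-weights update
        z += v
    return "dual", z / T                               # average iterate certifies A^T y >= (1 - eps) 1

# Toy usage on a random entrywise-nonnegative instance.
rng = np.random.default_rng(0)
kind, sol = packing_lp(rng.uniform(size=(50, 80)), eps=0.25)
print(kind, sol.sum())
```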
To prove the potential is monotone, [MRWZ16] uses step size K−1 and a Taylor approximation; combining with the termination condition yields the desired claim. Mirror descent. To certify that wt grows sufficiently (e.g. the method terminates in few iterations, else dual feasibility holds), we interpret the step wt+1 ← wt ◦ (1 + ηgt) as approximate entropic mirror descent. Specifically, we track the quantity ∑ 0≤t<T 〈ηgt, u〉, and show that if ‖wt‖1 has not grown sufficiently, then it must be bounded for every u ∈ ∆n, certifying dual feasibility. Formally, for any gt sequence and u ∈ ∆n, we show O(log(nd/ )) + log ( ‖wT ‖1 ‖w0‖1 ) ≥ ∑ 0≤t<T 〈ηgt, u〉 ≥ η ∑ 0≤t<T 〈 1−A>vt, u 〉 . The last inequality followed by gt being an upwards truncation. If ‖wT ‖1 is bounded (else, we have primal feasibility), we show the entire above expression is bounded O(log nd ) for any u. Thus, by setting T = O( log(nd/ )η ) and choosing u to be each coordinate indicator, it follows that the average of all vt is coordinatewise at least 1− , and solves Problem 1 as a dual solution. Our gt is the (truncated) gradient of the function used in the potential analysis, so its form allows us to interpret dual feasibility (e.g. vt has `1 norm 1 and is a valid dual point). Our analysis patterns standard mirror descent, complemented by side information which says that lack of a primal solution can transform a regret guarantee into a feasibility bound. We apply this framework to analyze `p 7The [MRWZ16] solver also generalizes to covering and mixed objectives; we focus on packing in this work. 8Packing linear programs are sometimes expressed as the optimization problem maxx≥0,Ax≤1 ‖x‖1, simi- larly to (1); these problems are equivalent up to a standard binary search, see e.g. discussion in [JLL+20]. variants of Problem 1, via different potentials; our proofs are quite straightforward upon adopting this perspective, and we believe it may yield new insights for instances with positivity structure. 4.2 `p-norm packing linear programs In this section, we give an example of the framework proposed in Section 4.1, for approximately solving `p norm packing linear programs. Specifically, we now consider the generalization of Problem 1 to `p norms; throughout, q = pp−1 is the dual norm. Problem 2 (`p packing linear program). Given entrywise nonnegative A ∈ Rd×n≥0 , either find primal solution x ∈ ∆n with ‖Ax‖p ≤ 1 + or dual solution y ∈ Rd≥0, ‖y‖q = 1 with A>y ≥ (1− )1. For p = log d , Problem 2 recovers Problem 1 up to constants as `p multiplicatively approximates `∞ by 1 + . We now state our method for solving Problem 2 as Algorithm 2. Algorithm 2 PNormPacking(A, , p) 1: Input: A ∈ Rd×n≥0 , ∈ [0, 1 2 ], p ≥ 2 2: η ← p−1, T ← 4p log( nd ) 3: [w0]i ← n2d for all i ∈ [n], z ← 0, t← 0 4: while ‖wt‖1 ≤ −1 do 5: gt ← max(0,1−A>(vt)p−1) entrywise, for vt ← Awt‖Awt‖p 6: wt+1 ← wt ◦ (1 + ηgt), z ← z + (vt)p−1, t← t+ 1 7: if t ≥ T then 8: return y = z‖z‖q 9: end if 10: end while 11: return x = wt‖wt‖1 Other than changing parameters, the only difference from Algorithm 1 is that v is a point with unit `q norm induced by the gradient of our potential Φt. We state our main potential fact, whose proof is based straightforwardly on Taylor expanding ‖·‖p, and deferred to Appendix C for brevity. Lemma 3. In all iterations t of Algorithm 2, defining Φt := ‖Awt‖p − ‖wt‖1, Φt+1 ≤ Φt. We now state our main result, which leverages the potential bound following the framework of Section 4.1. A proof can be found in Appendix C. Theorem 4. 
Algorithm 2 runs in time O(nnz(A) · p log(nd/ ) ). Further, its output solves Problem 2. 4.3 Schatten-norm packing semidefinite programs We generalize Algorithm 2 to solve Schatten packing semidefinite programs, which we now define. Problem 3. Given {Ai}i∈[n] ∈ Sd≥0, either find primal solution x ∈ ∆n with ∥∥∥∑i∈[n] xiAi∥∥∥ p ≤ 1 + or dual solution Y ∈ Sd≥0, ‖Y‖q = 1 with 〈Ai,Y〉 ≥ 1− for all i ∈ [n]. We assume that p is an odd integer for simplicity (sufficient for our applications), and leave for interesting future work the cases when p is even or noninteger. The potential used in the analysis and an overall guarantee are stated here, and deferred to Appendix C. The proofs are simple modifications of Lemma 3 and Theorem 4 using trace inequalities (similar to those in [JLL+20]) in place of scalar inequalities, as well as efficient approximation of quantities in Line 5 via the standard technique of Johnson-Lindestrauss projections. Lemma 4. In all iterations t of Algorithm 3, defining Φt := ∥∥∥∑i∈[n][wt]iAi∥∥∥ p −‖wt‖1, Φt+1 ≤ Φt. Algorithm 3 SchattenPacking({Ai}i∈[n], , p) 1: Input: {Ai}i∈[n] ∈ Sd≥0, ∈ [0, 12 ], p ≥ 2 2: η ← p−1, T ← 4p log( nd ) 3: [w0]i ← n2d for all i ∈ [n], z ← 0 4: while ‖wt‖1 ≤ −1 do 5: gt ← max ( 0,1− 〈 Ai,V p−1 t 〉) entrywise, for Vt ← ∑ i∈[n][wt]iAi ‖∑i∈[n][wt]iAi‖p 6: wt+1 ← wt ◦ (1 + ηgt), Z← Z + (Vt)p−1, t← t+ 1 7: if t ≥ T then 8: return Y = Z‖Z‖q 9: end if 10: end while 11: return x = wt‖wt‖1 Theorem 5. Let p be odd. Algorithm 3 runs in O(p log(nd/ ) ) iterations, and its output solves Problem 3. Each iteration is implementable in O(nnz · p log(nd/ ) 2 ), where nnz is the number of nonzero entries amongst all {Ai}i∈[n], losing O( ) in the quality of Problem 3 with probability 1− poly((nd/ )−1). 4.4 Schatten packing with a `∞ constraint We remark that the framework outlined in Section 4.1 is flexible enough to handle mixed-norm packing problems. Specifically, developments in Section 5 require the following guarantee. Proposition 2. Following Theorem 5’s notation, let p be odd, {Ai}i∈[n] ∈ Sd≥0, 0 < = O(α), and min x∈∆n ‖x‖∞≤ 1+α n ‖A(x)‖p = OPT. (4) for A(x) := ∑ i∈[n] xiAi. Given estimate of OPT exponentially bounded in nd , there is a procedure calling Algorithm 7 O(log nd ) times giving x ∈ ∆ n with ‖x‖∞ ≤ (1+α)(1+ ) n , ‖A(x)‖p ≤ (1 + )OPT. Algorithm 7 runs in O( log(nd/ ) logn 2 ) iterations, each requiring time O(nnz · p log(nd/ ) 2 ). Our method, found in Appendix C, approximately solves (4) by first applying a standard binary search to placeA(x) on the right scale, for which it suffices to solve an approximate decision problem. Then, we apply a truncated mirror descent procedure on the potential Φ(w) = log(exp(‖A(w)‖p) + exp( n1+α ‖w‖∞)) − ‖w‖1, and prove correctness for solving the decision problem following the framework we outlined in Section 4.1. 5 Robust sub-Gaussian PCA in nearly-linear time We give our nearly-linear time robust PCA method, leveraging developments of Section 4. Throughout, we will be operating under Assumption 1, for some corruption parameter with log −1 log d = O(1); = O( 1log d log log d ) suffices. We now develop tools to prove Theorem 2. Algorithm 4 uses three subroutines: our earlier 1DRobustVariance method (Lemma 1), an application of our earlier Proposition 2 to approximate the solution to min w∈Sn ∥∥∥∥∥∥ ∑ i∈[n] wiXiX > i ∥∥∥∥∥∥ p , for p = Θ (√ log d log −1 ) , (5) and a method for computing approximate eigenvectors by [MM15] (discussed in Appendix D). Proposition 3. 
There is an algorithm Power (Algorithm 1, [MM15]), parameterized by t ∈ [d], tolerance ̃ > 0, p ≥ 1, and A ∈ Sd≥0, which outputs orthonormal {zj}j∈[t] with the guarantee∣∣z>j Apzj − λpj (A)∣∣ ≤ ̃λpj (A)∣∣∣z>j Ap−1zj − λp−1j (A)∣∣∣ ≤ ̃λp−1j (A) for all j ∈ [t]. (6) Here, λj(A) is the jth largest eigenvalue of A. The total time required by the method is O(nnz(A) tp log dε ). Algorithm 4 RobustPCA({Xi}i∈[n], , t) 1: Input: {Xi}i∈[n] = O( 1log d log log d ), t ∈ [d] with Σt+1 ≤ (1− γ)Σ for γ in Theorem 2 2: w ← BoxedSchattenPacking (Proposition 2) on {Ai = XiX>i }i∈[n], α← , p as in (5) 3: M = ∑ i∈[n] wiXiX > i 4: {zj}j∈[t] = Power(t, , p,M) 5: αj ← 1DRobustVariance({Xi}i∈[n],M p−1 2 zj/‖M p−1 2 zj‖2, ) for all j ∈ [t] 6: return zj∗ for j∗ = argmaxj∈[t]αj Algorithm 4 is computationally bottlenecked by the application of Proposition 2 on Line 2 and the t calls to 1DRobustVariance on Line 5, from which the runtime guarantee of Theorem 2 follows straightforwardly. To demonstrate correctness, we first certify the quality of the solution to (5). Lemma 5. Let n = Ω ( d+log δ−1 ( log −1)2 ) . With probability 1− δ2 , the uniform distribution over G attains value (1 + ̃2 )‖Σ‖p for objective (5), where ̃ = C ′ log −1 for a universal constant C ′ > 0. The proof of this is similar to results in e.g. [DKK+19, Li18], and combines concentration guarantees with a union bound over all possible corruption sets B. This implies the following immediately, upon applying the guarantees of Proposition 2. Corollary 1. Let w be the output of Line 2 of RobustPCA. Then, we have ‖w‖∞ ≤ 1 (1−2 )n , and∥∥∥∑i∈[n] wiXiX>i ∥∥∥ p ≤ (1 + ̃) ‖Σ‖p under the guarantee of Lemma 5. Let w be the output of the solver. Recall that M = ∑n i=1 wiXiX > i . Additionally, define MG := ∑ i∈G wiXiX > i , wG := ∑ i∈G wi, MB := ∑ i∈B wiXiX > i , wB := ∑ i∈G wi . (7) Notice in particular that M = MG + MB , and that all these matrices are PSD. We next prove the second, crucial fact, which says that MG is a good approximator to Σ in Loewner ordering: Lemma 6. Let n = Ω ( d+log δ−1 ( log −1)2 ) . With probability at least 1− δ2 , (1 + ̃)Σ MG (1− ̃)Σ. The proof combines the strategy in Lemma 5 with the SDP solver guarantee. Perhaps surprisingly, Corollary 1 and Lemma 6 are the only two properties about M that our final analysis of Theorem 2 will need. In particular, we have the following key geometric proposition, which carefully combines trace inequalities to argue that the corrupted points cannot create too many new large eigendirections. Proposition 4. Let M = MG + MB be so that ‖M‖p ≤ (1 + ̃) ‖Σ‖p, MG 0 and MB 0, and so that (1 + ̃)Σ MG (1− ̃)Σ. Following notation of Algorithm 4, let M = ∑ j∈[d] λjvjv > j , Σ = ∑ j∈[d] σjuju > j (8) be sorted eigendecompositions of M and Σ, so λ1 ≥ . . . ≥ λd, and σ1 ≥ . . . ≥ σd. Let γ be as in Theorem 2, and assume σt+1 < (1− γ)σ1. Then, max j∈[t] v>j Σvj ≥ (1− γ) ‖Σ‖∞ . With Proposition 4 in place, the recovery bound of Theorem 2 follows from an exact SVD. We show in Appendix D that the method is robust to approximations of the form (6), yielding our final claim. Broader Impact Our work provides frameworks for learning properties about the covariance of sub-Gaussian distributions which have been corrupted under noise. As a key subroutine, we develop solvers for smoothed positive linear and semidefinite programs. We believe these results are interesting from an academic perspective, e.g. our techniques may be applicable generally for robust statistics and convex optimization researchers. 
Moreover, because our primary results concern robustness of models to arbitrarily corrupted data, we believe our methods may have practical implications for downstream tasks where protection against a malicious adversary is warranted. Similarly, as our main subroutine is a solver attaining strong computational guarantees for a wider variety of objectives than was previously known, it is possible that our methods can be leveraged to broaden the types of downstream tasks that can be performed. Namely, as ℓp-norm packing linear program solvers have found applications in fair resource allocation, our hope is that our semidefinite solvers with smoothed and mixed-norm guarantees can find similar applications in learning algorithms for objectives designed with fairness or privacy in mind.
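To recap how the pieces of Section 5 fit together, the following sketch mirrors the structure of Algorithm 4 (RobustPCA). It is a skeleton only: packing_solver is a placeholder for the BoxedSchattenPacking routine of Proposition 2 (the trivial uniform-weight stub below is not that solver), the exact eigendecomposition stands in for the Power method of Proposition 3, the trimmed estimator stands in for 1DRobustVariance, and the choice of p follows our reading of equation (5). All names are ours.

```python
import numpy as np

def uniform_weights(As, p, alpha):
    # Trivial stub standing in for BoxedSchattenPacking (Proposition 2); it
    # ignores the Schatten-p objective and just returns uniform weights.
    return np.ones(len(As)) / len(As)

def trimmed_variance(X, u, eps):
    # Stand-in for 1DRobustVariance: mean of squared projections after dropping
    # the largest ~2*eps fraction.
    a = np.sort((X @ u) ** 2)
    return float(a[: len(a) - int(np.ceil(2 * eps * len(a)))].mean())

def robust_pca(X, eps, t, packing_solver=uniform_weights):
    """Skeleton of Algorithm 4; every subroutine here is a placeholder."""
    n, d = X.shape
    p = max(3, int(round(np.sqrt(np.log(d) / np.log(1.0 / eps)))) | 1)  # odd p, our reading of (5)
    w = packing_solver([np.outer(x, x) for x in X], p, eps)             # Line 2: weights from the packing SDP
    M = (X * w[:, None]).T @ X                                          # Line 3: M = sum_i w_i X_i X_i^T
    Z = np.linalg.eigh(M)[1][:, -t:]                                    # Line 4: top-t eigenvectors (stand-in for Power)
    Mp = np.linalg.matrix_power(M, (p - 1) // 2)
    best, best_score = None, -np.inf
    for j in range(t):                                                  # Line 5: score each candidate robustly
        v = Mp @ Z[:, j]
        v /= np.linalg.norm(v)
        score = trimmed_variance(X, v, eps)
        if score > best_score:
            best, best_score = v, score
    return best                                                         # Line 6: direction with largest robust variance
```

A real use would plug the Proposition 2 solver in for packing_solver and replace the eigendecomposition with the Power method of Proposition 3, as in the paper.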
1. What are the main contributions of the paper in terms of the problems it addresses?
2. What are the strengths of the paper's approach to solving these problems?
3. Are there any weaknesses or limitations in the paper's methodology or conclusions?
4. How do the two problems addressed in the paper relate to each other?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper studies the robust sub-Gaussian principal component analysis problem and the width-independent Schatten packing SDP problem. The paper gives novel efficient algorithms for both problems.

Strengths
This paper establishes several crucial primitives. For the robust sub-Gaussian problem, the authors propose an algorithm which uses samples (with corruptions) to estimate the variance along any given unit direction. The algorithm is very clean and simple. The authors show that one just needs to drop the most influential O(eps) samples and the remaining samples would give a good approximation. For the L_p packing problem, the authors give a crucial property of the potential function in the mirror descent procedure.

Weaknesses
My only concern is that the two studied problems are somewhat independent. It is not clear to me what the connection is.
NIPS
Title Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing Abstract We develop two methods for the following fundamental statistical task: given an -corrupted set of n samples from a d-dimensional sub-Gaussian distribution, return an approximate top eigenvector of the covariance matrix. Our first robust PCA algorithm runs in polynomial time, returns a 1−O( log −1)-approximate top eigenvector, and is based on a simple iterative filtering approach. Our second, which attains a slightly worse approximation factor, runs in nearly-linear time and sample complexity under a mild spectral gap assumption. These are the first polynomial-time algorithms yielding non-trivial information about the covariance of a corrupted sub-Gaussian distribution without requiring additional algebraic structure of moments. As a key technical tool, we develop the first widthindependent solvers for Schatten-p norm packing semidefinite programs, giving a (1 + )-approximate solution in O(p log( ) −1) input-sparsity time iterations (where n, d are problem dimensions). N/A −1) input-sparsity time iterations (where n, d are problem dimensions). 1 Introduction We study two natural, but seemingly unrelated, problems in high dimensional robust statistics and continuous optimization respectively. As we will see, these problems have an intimate connection. Problem 1: Robust sub-Gaussian principal component analysis. We consider the following statistical task, which we call robust sub-Gaussian principal component analysis (PCA). Given samples X1, . . . , Xn from sub-Gaussian1 distribution D with covariance Σ, an fraction of which are arbitrarily corrupted, the task asks to output unit vector u with u>Σu ≥ (1 − γ) ‖Σ‖∞2 for tolerance γ. Ergo, the goal is to robustly return a (1 − γ)-approximate top eigenvector of the covariance of sub-Gaussian D. This is the natural extension of PCA to the robust statistics setting. There has been a flurry of recent work on efficient algorithms for robust statistical tasks, e.g. covariance estimation and PCA. From an information-theoretic perspective, sub-Gaussian concentration suffices for robust covariance estimation. Nonetheless, to date all polynomial-time algorithms achieving nontrivial guarantees on covariance estimation (including PCA specifically) in the presence of adversarial noise require additional algebraic structure. For instance, sum-of-squares certifiably bounded moments have been leveraged in polynomial time covariance estimation algorithms [HL18, KSS18]; however, this is a stronger assumption than sub-Gaussianity. In many applications (see discussion in [DKK+17]), the end goal of covariance estimation is PCA. Thus, a natural question which relaxes robust covariance estimation is: can we robustly estimate the top eigenvector of the covariance Σ, assuming only sub-Gaussian concentration? Our work answers this question affirmatively via two incomparable algorithms. The first achieves γ = O( log −1) in 1See Section 2 for a formal definition. 2Throughout we use ‖M‖p to denote the Schatten p-norm (cf. Section 2 for more details). 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. polynomial time; the second achieves γ = O( √ log −1 log d), in nearly-linear time under a mild gap assumption on Σ. Moreover, both methods have nearly-optimal sample complexity. Problem 2: Width-independent Schatten packing. We consider a natural generalization of packing semidefinite programs (SDPs) which we call Schatten packing. 
Given symmetric positive semidefinite A1, . . . ,An and parameter p ≥ 1, a Schatten packing SDP asks to solve the optimization problem min ∥∥∥∥∥∥ ∑ i∈[n] wiAi ∥∥∥∥∥∥ p subject to w ∈ ∆n. (1) Here, ‖M‖p is the Schatten-p norm of matrix M and ∆n is the probability simplex (see Section 2). When p = ∞, (1) is the well-studied (standard) packing SDP objective [JY11, ALO16, PTZ16], which asks to find the most spectrally bounded convex combination of packing matrices. For smaller p, the objective encourages combinations more (spectrally) uniformly distributed over directions. The specialization of (1) to diagonal matrices is a smooth generalization of packing linear programs, previously studied in the context of fair resource allocation [MSZ16, DFO18]. For the `∞ case of (1), packing SDPs have the desirable property of admitting “width-independent” approximation algorithms via exploiting positivity structure. Specifically, width-independent solvers obtain multiplicative approximations with runtimes independent or logarithmically dependent on size parameters of the problem. This is a strengthening of additive notions of approximation typically used for approximate semidefinite programming. Our work gives the first width-independent solver for Schatten packing. 1.1 Previous work Learning with adversarial outliers. The study of estimators robust to a small fraction of adversarial outliers dates back to foundational work, e.g. [Hub64, Tuk75]. Following more recent work [LRV16, DKK+19], there has been significant interest in efficient, robust algorithms for statistical tasks in high-dimensional settings. We focus on methods robustly estimating covariance properties here, and defer a thorough discussion of the (extensive) robust statistics literature to [Ste18, Li18, DK19]. There has been quite a bit of work in understanding and giving guarantees for robust covariance estimation where the uncorrupted distribution is exactly Gaussian [DKK+17, DKK+18, DKK+19, CDGW19]. These algorithms strongly use relationships between higher-order moments of Gaussian distributions via Isserlis’ theorem. Departing from the Gaussian setting, work of [LRV16] showed that if the distribution is an affine transformation of a 4-wise independent distribution, robust covariance estimation is possible. This was extended by [KSS18], which also assumed nontrivial structure in the moments of the distribution, namely that sub-Gaussianity was certifiable via the sum-of-squares proof system. To the best of our knowledge it has remained open to give nontrivial guarantees for robust estimation of any covariance properties under minimal assumptions, i.e. sub-Gaussian concentration. All aforementioned algorithms also yield guarantees for robust PCA, by applying a top eigenvector method to the learned covariance. However, performing robust PCA via the intermediate covariance estimation step is lossy, both statistically and computationally. From a statistical perspective, Ω(d2) samples are necessary to learn the covariance of a d-dimensional Gaussian in Frobenius norm (and for known efficient algorithms for spectral norm error [DKS17]); in contrast, O(d) samples suffice for (non-robust) PCA. Computationally, even when the underlying distrubution is exactly Gaussian, the best-known covariance estimation algorithms run in time Ω(d3.25); algorithms working in more general settings based on the sum-of-squares approach require much more time. In contrast, the power method for PCA in a d× d matrix takes time Õ(d2)3. 
Motivated by this, our work initiates the direct study of robust PCA, which is often independently interesting in applications. We remark there is another problem termed “robust PCA” in the literature, e.g. [CLMW11], under a different generative model. We defer a detailed discussion to [DKK+17], which experimentally shows that algorithms from that line of work do not transfer well to our corruption model. Width-independent iterative methods. Semidefinite programming (SDP) and its linear programming specialization are fundamental computational tasks, with myriad applications in learning, operations research, and computer science. Though general-purpose polynomial time algorithms exist for 3We say g = Õ(f) if g = O(f logc f) for some constant c > 0. SDPs ([NN94]), in practical settings in high dimensions, approximations depending linearly on input size and polynomially on error are sometimes desirable. To this end, approximation algorithms based on entropic mirror descent have been intensely studied [WK06, AK16, GHM15, AL17, CDST19], obtaining additive approximations to the objective with runtimes depending polynomially on ρ/ , where ρ is the “width”, the largest spectral norm of a constraint. For structured SDPs, stronger guarantees can be obtained in terms of width. Specifically, several algorithms developed for packing SDPs ((1) with p =∞) yield (1+ )-multiplicative approximations to the objective, with logarithmic dependence on width [JY11, PTZ16, ALO16, JLL+20]. As ρ upper bounds objective value in this setting, in the worst case runtimes of width-dependent solvers yielding ρ-additive approximations have similar dependences as width-independent counterparts. Widthindependent solvers simultaneously yield stronger multiplicative bounds at all scales of objective value, making them desirable in suitable applications. In particular, `∞ packing SDPs have found great utility in robust statistics algorithm design [CG18, CDG19, CDGW19, DL19]. Beyond `∞ packing, width-independent guarantees in the SDP literature are few and far between; to our knowledge, other than the covering and mixed solvers of [JLL+20], ours is the first such guarantee for a broader family of objectives4. Our method complements analogous `p extensions in the width-dependent setting, e.g. [ALO15], as well as width-independent solvers for `p packing linear programs [MSZ16, DFO18]. We highlight the fair packing solvers of [MSZ16, DFO18], motivated by problems in equitable resource allocation, which further solved `p packing variants for p 6∈ [1,∞). We find analogous problems in semidefinite settings interesting, and defer to future work. Concurrent work. Concurrent work by Kong et al. [KSKO20] also develops a PCA algorithm tolerant to a bounded fraction of adversarial corruption. Their method is similar to our algorithm based on soft downweighting (Algorithm 6), is analyzed under a fourth moment bound assumption (as opposed to sub-Gaussianity as in this paper), and also generalizes to top-k eigenvector estimation. To our knowledge, our fast algorithm (Algorithm 4) is the first in the literature which robustly solves the 1-PCA problem in near-linear time (for gapped covariances), at the cost of weaker error bounds. 1.2 Our results Robust sub-Gaussian principal component analysis. We give two algorithms for robust subGaussian PCA5. Both are sample optimal, polynomial-time, and assume only sub-Gaussianity. The first is via a simple filtering approach, as summarized in the following (and developed in Section 3). Theorem 1. 
Under Assumption 1, let δ ∈ [0, 1], and n = Ω ( d+log δ−1 ( log −1)2 ) . Algorithm 6 runs in time O(nd 2 log n δ log n δ ), and outputs u with u >Σu > (1− C? log −1)‖Σ‖∞, for C? a fixed multiple of parameter c in Assumption 1, with probability at least 1− δ. Our second algorithm is more efficient under mild conditions, but yields a worse approximation 1− γ for γ = O( √ log −1 log d). Specifically, if there are few eigenvalues of Σ larger than 1− γ, our algorithm runs in nearly-linear time. Note that if there are many eigenvalues above this threshold, then the PCA problem itself is not very well-posed; our algorithm is very efficient in the interesting setting where the approximate top eigenvector is identifiable. We state our main algorithmic guarantee here, and defer details to Section 5. Theorem 2. Under Assumption 1, let δ ∈ [0, 1], n = Ω ( d+log δ−1 ( log −1)2 ) , γ = C √ log −1 log d, for C a fixed multiple of parameter c from Assumption 1, and let t ∈ [d] satisfy Σt+1 < (1− γ) ‖Σ‖∞. Algorithm 4 outputs a unit vector u ∈ Rd with u>Σu ≥ (1− γ)‖Σ‖∞ in time Õ( nd 4.5 + ndt 1.5 ). Since Ω(d −2) samples are necessary for a (1− )-approximation to the top eigenvector of Σ via uncorrupted samples, our first method is sample-optimal, as is our second up to a Õ( −1) factor. Width-independent Schatten packing. Our second method crucially requires an efficient solver for Schatten packing SDPs. We demonstrate that Schatten packing, i.e. (1) for arbitrary p, admits width-independent solvers. We state an informal guarantee, and defer details to Section 4. 4In concurrent and independent work, [CMY20] develops width-independent solvers for Ky-Fan packing objectives, a different notion of generalization than the Schatten packing objectives we consider. 5We follow the distribution and corruption model described in Assumption 1. Theorem 3. Let {Ai}i∈[n] ∈ Sd≥0, and > 0. There is an algorithm taking O( p log(nd ) ) iterations, returning a 1 + multiplicative approximation to the problem (1). For odd p, each iteration can be implemented in time nearly-linear in the number of nonzeros amongst all {Ai}i∈[n]. 2 Preliminaries General notation. [n] denotes the set 1 ≤ i ≤ n. The operation ◦ applied to two vectors of equal dimension is their entrywise product. Applied to a vector, ‖·‖p is the `p norm; applied to a symmetric matrix, ‖·‖p is the Schatten-p norm, i.e. the `p norm of the spectrum. The dual norm of `p is `q for q = pp−1 ; when p = ∞, q = 1. ∆ n is the n-dimensional simplex (subset of positive orthant with `1-norm 1) and we define Snε ⊆ ∆n to be the truncated simplex: Snε := { w ∈ Rn≥0 ∣∣∣∣∣ ‖w‖1 = 1, w ≤ 1n(1− ε) entrywise } . (2) Matrices. Sd is d × d symmetric matrices, and Sd≥0 is the positive semidefinite subset. I is the identity of appropriate dimension. λmax, λmin, and Tr are the largest and smallest eigenvalues and trace of a symmetric matrix. For M,N ∈ Sd, 〈M,N〉 := Tr (MN) and we use the Loewner order , (M N iff N−M ∈ Sd≥0). The seminorm of M 0 is ‖v‖M := √ v>Mv. Fact 1. For A, B with compatible dimension, Tr(AB) = Tr(BA). For M,N ∈ Sd≥0, 〈M,N〉 ≥ 0. Fact 2. We have the following characterization of the Schatten-p norm: for M ∈ Sd, and q = pp−1 , ‖M‖p = sup N∈Sd, ‖N‖q=1 〈N,M〉 . For M = ∑ j∈[d] λiviv > i , the satisfying N is ∑ j∈[d]±λ p−1 i viv > i ‖M‖p−1p , so NM has spectrum |λ| p ‖M‖p−1p . Distributions. We denote drawing vector X from distribution D by X ∼ D, and the covariance Σ of D is EX∼D [ XX> ] . 
We say scalar distribution D is γ2-sub-Gaussian if EX∼D[X] = 0 and EX∼D [exp (tX)] ≤ exp ( t2γ2 2 ) ∀t ∈ R. Multivariate D has sub-Gaussian proxy Γ if its restriction to any unit v is ‖v‖2Γ-sub-Gaussian, i.e. EX∼D [ exp ( tX>v )] ≤ exp ( t2 ‖v‖2Γ 2 ) for all ‖v‖2 = 1, t ∈ R. (3) We consider the following standard model for gross corruption with respect to a distribution D. Assumption 1 (Corruption model, see [DKK+19]). Let D be a mean-zero distribution on Rd with covariance Σ and sub-Gaussian proxy Γ cΣ for a constant c. Denote by index set G′ with |G′| = n a set of (uncorrupted) samples {Xi}i∈G′ ∼ D. An adversary arbitrarily replaces n points in G′; we denote the new index set by [n] = B ∪G, where B is the (unknown) set of points added by an adversary, and G ⊆ G′ is the set of points from G′ that were not changed. As we only estimate covariance properties, the assumption that D is mean-zero only loses constants in problem parameters, by pairing samples and subtracting them (cf. [DKK+19], Section 4.5.1). 3 Robust sub-Gaussian PCA via filtering In this section, we sketch the proof of Theorem 1, which gives guarantees on our filtering algorithm for robust sub-Gaussian PCA. This algorithm obtains stronger statistical guarantees than Theorem 2, at the cost of super-linear runtime; the algorithm is given as Algorithm 6. Our analysis stems largely from concentration facts about sub-Gaussian distributions, as well as the following (folklore) fact regarding estimation of variance along any particular direction. Lemma 1. Under Assumption 1, let δ ∈ [0, 1], n = Ω ( log δ−1 ( log −1)2 ) , and u ∈ Rd be a fixed unit vector. Algorithm 5, 1DRobustVariance, takes input {Xi}i∈[n], u, and , and outputs σ2u with |u>Σu−σ2u| < Cu>Σu · log −1 with probability at least 1− δ, and runs in time O(nd+n log n), for C a fixed multiple of the parameter c in Assumption 1. In other words, we show that using corrupted samples, we can efficiently estimate a 1 +O( log −1)- multiplicative approximation of the variance of D in any unit direction6. This proof is deferred to Appendix B for completeness. Algorithm 6 combines this key insight with a soft filtering approach which has found many applications in the recent robust statistics literature, suggested by the following known structural fact found in previous work (e.g. Lemma A.1 of [DHL19], see also [SCV17, Ste18]). Lemma 2. Let {ai}i∈[m], {wi}i∈[m] be sets of nonnegative reals, and amax = maxi∈[m] ai. Define w′i = ( 1− aiamax ) wi, for all i ∈ [m]. Consider any disjoint partition IB , IG of [m] with∑ i∈IB wiai > ∑ i∈IG wiai. Then, ∑ i∈IB wi − w ′ i > 1 2amax ∑ i∈[m] wiai > ∑ i∈IG wi − w ′ i. Our Algorithm 6, PCAFilter, takes as input a set of corrupted samples {Xi}i∈[n] following Assumption 1 and the corruption parameter . At a high level, it initializes a uniform weight vector w(0), and iteratively operates as follows (we denote by M(w) the empirical covariance ∑ i∈[n] wiXiX > i ). 1. ut ← approximate top eigenvector of M(w(t−1)) via power iteration. 2. Compute σ2t ← 1DRobustVariance({Xi}i∈[n], ut, ). 3. If σ2t > (1−O( log −1)) · u>t M(w(t−1))ut, then terminate and return ut. 4. Else: (a) Sort indices i ∈ [n] by ai ← 〈ut, Xi〉2, with a1 smallest. (b) Let ` ≤ i ≤ n be the smallest set for which ∑n i=` wi ≥ 2 , and apply the downweight- ing procedure of Lemma 2 to this subset of indices. The analysis of Algorithm 6 then proceeds in two stages. Monotonicity of downweighting. 
The analysis of Algorithm 6 then proceeds in two stages.

Monotonicity of downweighting. We show the invariant criterion for Lemma 2 (namely, that for the set ℓ ≤ i ≤ n in every iteration, there is more spectral mass on bad points than good) holds inductively for our algorithm. Specifically, lack of termination implies M(w^{(t−1)}) puts significant mass on bad directions, which combined with concentration of good directions yields the invariant. The details of this argument can be found as Lemma 11.

Roughly uniform weightings imply approximation quality. As Lemma 2 then applies, the procedure always removes more mass from bad points than good, and thus can only remove at most 2ε mass total by the corruption model. Thus, the weights w^{(t)} are always roughly uniform (in S^n_{O(ε)}), which by standard concentration facts (see Appendix A) implies the quality of the approximate top eigenvector is good. Moreover, the iteration count is bounded by roughly d because whenever the algorithm does not terminate, enough mass is removed from large spectral directions. Combining with the termination criteria implies that when a vector is returned, it is a close approximation to the top direction of Σ. Details can be found as Lemma 13 and in the proof of Theorem 1.

4 Schatten packing

For our second robust PCA algorithm, developed in Section 5, we require a key technical tool which we now develop in this section. The tool, Schatten-norm packing semidefinite programs (and hybrid-norm extensions), is a smoothed generalization of the classical packing semidefinite program, which may be of independent interest in other applications. At a high level, the reason Schatten packing solvers are useful for the robust PCA problem is that while an adversary can fool a PCA algorithm based on operator-norm semidefinite programs by “promoting” a single other eigenvector to have a larger variance, a p-norm-based semidefinite program forces a tradeoff between the number of directions promoted and the amount of variance introduced.

⁶Corollary 4 gives a slightly stronger guarantee that reusing samples does not break dependencies of u.

4.1 Mirror descent interpretation of [MRWZ16]

We begin by reinterpreting the [MRWZ16] solver, which achieves the state-of-the-art parallel runtime for packing LPs⁷. An (ℓ_∞) packing LP algorithm solves the following decision problem⁸.

Problem 1 (ℓ_∞ packing linear program). Given entrywise nonnegative A ∈ R^{d×n}_{≥0}, either find primal solution x ∈ ∆^n with ‖Ax‖_∞ ≤ 1 + ε or dual solution y ∈ ∆^d with A^⊤y ≥ (1 − ε)1.

Algorithm 1 PackingLP(A, ε)
1: Input: A ∈ R^{d×n}_{≥0}, ε ∈ [0, 1/2]
2: K ← 3 log(d)/ε, η ← εK^{-1}, T ← 4 log(d) log(nd/ε)/ε^2
3: [w_0]_i ← ε/(n^2 d) for all i ∈ [n], z ← 0, t ← 0
4: while Aw_t ≤ K1, ‖w_t‖_1 ≤ K do
5:   v_t ← exp(Aw_t)/‖exp(Aw_t)‖_1
6:   g_t ← max(0, 1 − A^⊤v_t) entrywise
7:   w_{t+1} ← w_t ◦ (1 + ηg_t), z ← z + v_t, t ← t + 1
8:   if t ≥ T then
9:     return y ← (1/T)z
10:  end if
11: end while
12: return x ← w_t/‖w_t‖_1

The following result is shown in [MRWZ16].

Proposition 1. PackingLP (Algorithm 1) solves Problem 1 in O(nnz(A) · log(d) log(nd/ε)/ε^2) time.

Our interpretation of the analysis of [MRWZ16] combines two ingredients: a potential argument and mirror descent (alternatively known as the “multiplicative weights” framework), which yields a dual feasible point if ‖w_t‖_1 did not grow sufficiently.

Potential argument. The potential used by [MRWZ16] is log(Σ_{j∈[d]} exp([Aw_t]_j)) − ‖w_t‖_1, well-known to be an O(log d)-additive approximation of ‖Aw_t‖_∞ − ‖w_t‖_1. As soon as ‖Aw_t‖_∞ or ‖w_t‖_1 reaches the scale O(log(d)/ε), by nonnegativity this becomes a multiplicative guarantee, motivating the setting of the threshold K. To prove the potential is monotone, [MRWZ16] uses step size εK^{-1} and a Taylor approximation; combining with the termination condition yields the desired claim.
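As a concrete reference, here is a direct numpy transcription of Algorithm 1 (a sketch only: dense matrix products rather than an nnz-based implementation, and the constants K, η, T follow the reconstruction above):

```python
import numpy as np

def packing_lp(A, eps):
    """Width-independent l_inf packing LP solver in the spirit of Algorithm 1.

    A: entrywise-nonnegative (d, n) matrix; eps: accuracy in (0, 1/2].
    Returns ("primal", x) with x in the simplex and ||Ax||_inf <= 1 + O(eps),
    or ("dual", y) with y in the simplex and A^T y >= (1 - O(eps)) entrywise.
    """
    d, n = A.shape
    K = 3 * np.log(d) / eps
    eta = eps / K
    T = int(np.ceil(4 * np.log(d) * np.log(n * d / eps) / eps ** 2))
    w = np.full(n, eps / (n ** 2 * d))
    z = np.zeros(d)
    for t in range(T):
        Aw = A @ w
        if Aw.max() > K or w.sum() > K:
            return "primal", w / w.sum()
        v = np.exp(Aw - Aw.max())           # softmax weights (shifted for stability)
        v /= v.sum()
        g = np.maximum(0.0, 1.0 - A.T @ v)  # truncated gradient
        w = w * (1 + eta * g)               # multiplicative-weights step
        z += v
    return "dual", z / T
```

Either returned certificate is meaningful: the primal branch normalizes the grown weights, while the dual branch averages the softmax iterates v_t.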
Mirror descent. To certify that w_t grows sufficiently (e.g. the method terminates in few iterations, else dual feasibility holds), we interpret the step w_{t+1} ← w_t ◦ (1 + ηg_t) as approximate entropic mirror descent. Specifically, we track the quantity Σ_{0≤t<T} ⟨ηg_t, u⟩, and show that if ‖w_t‖_1 has not grown sufficiently, then it must be bounded for every u ∈ ∆^n, certifying dual feasibility. Formally, for any g_t sequence and u ∈ ∆^n, we show

O(log(nd/ε)) + log(‖w_T‖_1/‖w_0‖_1) ≥ Σ_{0≤t<T} ⟨ηg_t, u⟩ ≥ η Σ_{0≤t<T} ⟨1 − A^⊤v_t, u⟩.

The last inequality follows from g_t being an upwards truncation. If ‖w_T‖_1 is bounded (else, we have primal feasibility), we show the entire above expression is bounded by O(log(nd/ε)) for any u. Thus, by setting T = O(log(nd/ε)/η) and choosing u to be each coordinate indicator, it follows that the average of all v_t is coordinatewise at least 1 − ε, and solves Problem 1 as a dual solution. Our g_t is the (truncated) gradient of the function used in the potential analysis, so its form allows us to interpret dual feasibility (e.g. v_t has ℓ_1 norm 1 and is a valid dual point). Our analysis patterns standard mirror descent, complemented by side information which says that lack of a primal solution can transform a regret guarantee into a feasibility bound. We apply this framework to analyze ℓ_p variants of Problem 1, via different potentials; our proofs are quite straightforward upon adopting this perspective, and we believe it may yield new insights for instances with positivity structure.

⁷The [MRWZ16] solver also generalizes to covering and mixed objectives; we focus on packing in this work.
⁸Packing linear programs are sometimes expressed as the optimization problem max_{x≥0, Ax≤1} ‖x‖_1, similarly to (1); these problems are equivalent up to a standard binary search, see e.g. discussion in [JLL+20].

4.2 ℓ_p-norm packing linear programs

In this section, we give an example of the framework proposed in Section 4.1, for approximately solving ℓ_p norm packing linear programs. Specifically, we now consider the generalization of Problem 1 to ℓ_p norms; throughout, q = p/(p − 1) is the dual norm.

Problem 2 (ℓ_p packing linear program). Given entrywise nonnegative A ∈ R^{d×n}_{≥0}, either find primal solution x ∈ ∆^n with ‖Ax‖_p ≤ 1 + ε or dual solution y ∈ R^d_{≥0}, ‖y‖_q = 1 with A^⊤y ≥ (1 − ε)1.

For p = log(d)/ε, Problem 2 recovers Problem 1 up to constants, as ℓ_p multiplicatively approximates ℓ_∞ by 1 + ε. We now state our method for solving Problem 2 as Algorithm 2.

Algorithm 2 PNormPacking(A, ε, p)
1: Input: A ∈ R^{d×n}_{≥0}, ε ∈ [0, 1/2], p ≥ 2
2: η ← εp^{-1}, T ← 4p log(nd/ε)/ε
3: [w_0]_i ← ε/(n^2 d) for all i ∈ [n], z ← 0, t ← 0
4: while ‖w_t‖_1 ≤ ε^{-1} do
5:   g_t ← max(0, 1 − A^⊤(v_t)^{p−1}) entrywise, for v_t ← Aw_t/‖Aw_t‖_p
6:   w_{t+1} ← w_t ◦ (1 + ηg_t), z ← z + (v_t)^{p−1}, t ← t + 1
7:   if t ≥ T then
8:     return y = z/‖z‖_q
9:   end if
10: end while
11: return x = w_t/‖w_t‖_1

Other than changing parameters, the only difference from Algorithm 1 is that v is a point with unit ℓ_q norm induced by the gradient of our potential Φ_t. We state our main potential fact, whose proof is based straightforwardly on Taylor expanding ‖·‖_p, and deferred to Appendix C for brevity.

Lemma 3. In all iterations t of Algorithm 2, defining Φ_t := ‖Aw_t‖_p − ‖w_t‖_1, we have Φ_{t+1} ≤ Φ_t.
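Relative to the ℓ_∞ sketch above, only the normalization of v_t, the gradient, and the dual accumulation change; a sketch of the modified inner step, with the same caveats about reconstructed constants:

```python
import numpy as np

def pnorm_packing_step(A, w, z, eps, p):
    """One iteration of the l_p packing solver in the style of Algorithm 2.

    Returns the updated weights w, the accumulated dual iterate z, and a flag
    indicating whether the primal stopping rule ||w||_1 > 1/eps has fired.
    """
    if w.sum() > 1.0 / eps:
        return w, z, True                              # primal certificate: w / ||w||_1
    eta = eps / p
    Aw = A @ w
    v = Aw / np.linalg.norm(Aw, ord=p)                 # unit l_p-norm point
    g = np.maximum(0.0, 1.0 - A.T @ v ** (p - 1))      # truncated gradient
    w = w * (1 + eta * g)
    z = z + v ** (p - 1)                               # dual iterate accumulates v^(p-1)
    return w, z, False
```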
We now state our main result, which leverages the potential bound following the framework of Section 4.1. A proof can be found in Appendix C.

Theorem 4. Algorithm 2 runs in time O(nnz(A) · p log(nd/ε)/ε). Further, its output solves Problem 2.

4.3 Schatten-norm packing semidefinite programs

We generalize Algorithm 2 to solve Schatten packing semidefinite programs, which we now define.

Problem 3. Given {A_i}_{i∈[n]} ∈ S^d_{≥0}, either find primal solution x ∈ ∆^n with ‖Σ_{i∈[n]} x_i A_i‖_p ≤ 1 + ε or dual solution Y ∈ S^d_{≥0}, ‖Y‖_q = 1 with ⟨A_i, Y⟩ ≥ 1 − ε for all i ∈ [n].

We assume that p is an odd integer for simplicity (sufficient for our applications), and leave for interesting future work the cases when p is even or noninteger. The potential used in the analysis and an overall guarantee are stated here, with proofs deferred to Appendix C. The proofs are simple modifications of Lemma 3 and Theorem 4 using trace inequalities (similar to those in [JLL+20]) in place of scalar inequalities, as well as efficient approximation of the quantities in Line 5 via the standard technique of Johnson-Lindenstrauss projections.

Lemma 4. In all iterations t of Algorithm 3, defining Φ_t := ‖Σ_{i∈[n]} [w_t]_i A_i‖_p − ‖w_t‖_1, we have Φ_{t+1} ≤ Φ_t.

Algorithm 3 SchattenPacking({A_i}_{i∈[n]}, ε, p)
1: Input: {A_i}_{i∈[n]} ∈ S^d_{≥0}, ε ∈ [0, 1/2], p ≥ 2
2: η ← εp^{-1}, T ← 4p log(nd/ε)/ε
3: [w_0]_i ← ε/(n^2 d) for all i ∈ [n], Z ← 0
4: while ‖w_t‖_1 ≤ ε^{-1} do
5:   g_t ← max(0, 1 − ⟨A_i, V_t^{p−1}⟩) entrywise, for V_t ← Σ_{i∈[n]} [w_t]_i A_i / ‖Σ_{i∈[n]} [w_t]_i A_i‖_p
6:   w_{t+1} ← w_t ◦ (1 + ηg_t), Z ← Z + (V_t)^{p−1}, t ← t + 1
7:   if t ≥ T then
8:     return Y = Z/‖Z‖_q
9:   end if
10: end while
11: return x = w_t/‖w_t‖_1

Theorem 5. Let p be odd. Algorithm 3 runs in O(p log(nd/ε)/ε) iterations, and its output solves Problem 3. Each iteration is implementable in O(nnz · p log(nd/ε)/ε^2) time, where nnz is the number of nonzero entries amongst all {A_i}_{i∈[n]}, losing O(ε) in the quality of Problem 3 with probability 1 − poly((nd/ε)^{-1}).

4.4 Schatten packing with an ℓ_∞ constraint

We remark that the framework outlined in Section 4.1 is flexible enough to handle mixed-norm packing problems. Specifically, developments in Section 5 require the following guarantee.

Proposition 2. Following Theorem 5’s notation, let p be odd, {A_i}_{i∈[n]} ∈ S^d_{≥0}, 0 < ε = O(α), and

min_{x ∈ ∆^n, ‖x‖_∞ ≤ (1+α)/n} ‖A(x)‖_p = OPT,   (4)

for A(x) := Σ_{i∈[n]} x_i A_i. Given an estimate of OPT exponentially bounded in nd/ε, there is a procedure calling Algorithm 7 O(log(nd/ε)) times giving x ∈ ∆^n with ‖x‖_∞ ≤ (1+α)(1+ε)/n, ‖A(x)‖_p ≤ (1 + ε)OPT. Algorithm 7 runs in O(log(nd/ε) log(n)/ε^2) iterations, each requiring time O(nnz · p log(nd/ε)/ε^2).

Our method, found in Appendix C, approximately solves (4) by first applying a standard binary search to place A(x) on the right scale, for which it suffices to solve an approximate decision problem. Then, we apply a truncated mirror descent procedure on the potential

Φ(w) = log(exp(‖A(w)‖_p) + exp((n/(1+α))‖w‖_∞)) − ‖w‖_1,

and prove correctness for solving the decision problem following the framework we outlined in Section 4.1.

5 Robust sub-Gaussian PCA in nearly-linear time

We give our nearly-linear time robust PCA method, leveraging the developments of Section 4. Throughout, we will be operating under Assumption 1, for some corruption parameter ε with ε log ε^{-1} log d = O(1); ε = O(1/(log d log log d)) suffices. We now develop tools to prove Theorem 2. Algorithm 4 uses three subroutines: our earlier 1DRobustVariance method (Lemma 1), an application of our earlier Proposition 2 to approximate the solution to

min_{w ∈ S^n_ε} ‖Σ_{i∈[n]} w_i X_i X_i^⊤‖_p, for p = Θ(√(log d/(ε log ε^{-1}))),   (5)

and a method for computing approximate eigenvectors by [MM15] (discussed in Appendix D).
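Before stating the eigenvector subroutine's guarantee, here is a dense numpy sketch of the Schatten packing iteration of Algorithm 3 (which, in its ℓ_∞-boxed form of Proposition 2, is what problems like (5) are solved with). It forms V_t explicitly via an eigendecomposition, whereas the paper's nearly-linear-time version uses polynomial iterations and Johnson-Lindenstrauss sketching; the constants again follow the reconstruction above.

```python
import numpy as np

def schatten_p_norm(M, p):
    return np.sum(np.abs(np.linalg.eigvalsh(M)) ** p) ** (1.0 / p)

def schatten_packing(As, eps, p):
    """Dense sketch of Algorithm 3 (SchattenPacking) for odd integer p >= 3.

    As: list of (d, d) PSD matrices A_i; eps: accuracy.
    Returns ("primal", x) with x in the simplex, or ("dual", Y) with ||Y||_q = 1.
    """
    n, d = len(As), As[0].shape[0]
    q = p / (p - 1)
    eta = eps / p
    T = int(np.ceil(4 * p * np.log(n * d / eps) / eps))
    w = np.full(n, eps / (n ** 2 * d))
    Z = np.zeros((d, d))
    A_stack = np.stack(As)                               # (n, d, d)
    for _ in range(T):
        if w.sum() > 1.0 / eps:
            return "primal", w / w.sum()
        M = np.tensordot(w, A_stack, axes=1)             # sum_i w_i A_i
        V = M / schatten_p_norm(M, p)                    # unit Schatten-p norm
        V_pow = np.linalg.matrix_power(V, p - 1)
        g = np.maximum(0.0, 1.0 - np.einsum('ijk,kj->i', A_stack, V_pow))
        w = w * (1 + eta * g)                            # multiplicative update
        Z = Z + V_pow
    lam_q = np.sum(np.abs(np.linalg.eigvalsh(Z)) ** q) ** (1.0 / q)
    return "dual", Z / lam_q
```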
Proposition 3. There is an algorithm Power (Algorithm 1, [MM15]), parameterized by t ∈ [d], tolerance ε̃ > 0, p ≥ 1, and A ∈ S^d_{≥0}, which outputs orthonormal {z_j}_{j∈[t]} with the guarantee

|z_j^⊤ A^p z_j − λ_j^p(A)| ≤ ε̃ λ_j^p(A),  |z_j^⊤ A^{p−1} z_j − λ_j^{p−1}(A)| ≤ ε̃ λ_j^{p−1}(A) for all j ∈ [t].   (6)

Here, λ_j(A) is the jth largest eigenvalue of A. The total time required by the method is O(nnz(A) · tp log(d)/ε̃).

Algorithm 4 RobustPCA({X_i}_{i∈[n]}, ε, t)
1: Input: {X_i}_{i∈[n]}, ε = O(1/(log d log log d)), t ∈ [d] with Σ_{t+1} ≤ (1 − γ)‖Σ‖_∞ for γ in Theorem 2
2: w ← BoxedSchattenPacking (Proposition 2) on {A_i = X_i X_i^⊤}_{i∈[n]}, α ← ε, p as in (5)
3: M = Σ_{i∈[n]} w_i X_i X_i^⊤
4: {z_j}_{j∈[t]} = Power(t, ε, p, M)
5: α_j ← 1DRobustVariance({X_i}_{i∈[n]}, M^{(p−1)/2} z_j/‖M^{(p−1)/2} z_j‖_2, ε) for all j ∈ [t]
6: return z_{j*} for j* = argmax_{j∈[t]} α_j

Algorithm 4 is computationally bottlenecked by the application of Proposition 2 on Line 2 and the t calls to 1DRobustVariance on Line 5, from which the runtime guarantee of Theorem 2 follows straightforwardly. To demonstrate correctness, we first certify the quality of the solution to (5).

Lemma 5. Let n = Ω((d + log δ^{-1})/(ε log ε^{-1})^2). With probability 1 − δ/2, the uniform distribution over G attains value (1 + ε̃/2)‖Σ‖_p for objective (5), where ε̃ = C′ ε log ε^{-1} for a universal constant C′ > 0.

The proof of this is similar to results in e.g. [DKK+19, Li18], and combines concentration guarantees with a union bound over all possible corruption sets B. This implies the following immediately, upon applying the guarantees of Proposition 2.

Corollary 1. Let w be the output of Line 2 of RobustPCA. Then, we have ‖w‖_∞ ≤ 1/((1 − 2ε)n), and ‖Σ_{i∈[n]} w_i X_i X_i^⊤‖_p ≤ (1 + ε̃)‖Σ‖_p under the guarantee of Lemma 5.

Let w be the output of the solver. Recall that M = Σ_{i=1}^n w_i X_i X_i^⊤. Additionally, define

M_G := Σ_{i∈G} w_i X_i X_i^⊤,  w_G := Σ_{i∈G} w_i,  M_B := Σ_{i∈B} w_i X_i X_i^⊤,  w_B := Σ_{i∈B} w_i.   (7)

Notice in particular that M = M_G + M_B, and that all these matrices are PSD. We next prove the second, crucial fact, which says that M_G is a good approximator to Σ in the Loewner ordering:

Lemma 6. Let n = Ω((d + log δ^{-1})/(ε log ε^{-1})^2). With probability at least 1 − δ/2, (1 + ε̃)Σ ⪰ M_G ⪰ (1 − ε̃)Σ.

The proof combines the strategy in Lemma 5 with the SDP solver guarantee. Perhaps surprisingly, Corollary 1 and Lemma 6 are the only two properties about M that our final analysis of Theorem 2 will need. In particular, we have the following key geometric proposition, which carefully combines trace inequalities to argue that the corrupted points cannot create too many new large eigendirections.

Proposition 4. Let M = M_G + M_B be such that ‖M‖_p ≤ (1 + ε̃)‖Σ‖_p, M_G ⪰ 0 and M_B ⪰ 0, and such that (1 + ε̃)Σ ⪰ M_G ⪰ (1 − ε̃)Σ. Following the notation of Algorithm 4, let

M = Σ_{j∈[d]} λ_j v_j v_j^⊤,  Σ = Σ_{j∈[d]} σ_j u_j u_j^⊤   (8)

be sorted eigendecompositions of M and Σ, so λ_1 ≥ … ≥ λ_d and σ_1 ≥ … ≥ σ_d. Let γ be as in Theorem 2, and assume σ_{t+1} < (1 − γ)σ_1. Then,

max_{j∈[t]} v_j^⊤ Σ v_j ≥ (1 − γ)‖Σ‖_∞.

With Proposition 4 in place, the recovery bound of Theorem 2 follows from an exact SVD. We show in Appendix D that the method is robust to approximations of the form (6), yielding our final claim.
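Putting the pieces together, here is a dense end-to-end sketch of Algorithm 4 (RobustPCA). It reuses the schatten_packing and one_d_robust_variance sketches above (so the ℓ_∞ box of Proposition 2 is not actually enforced), and an exact eigendecomposition stands in for the Power subroutine of Proposition 3; all of these are simplifications, not the paper's implementation.

```python
import numpy as np

def robust_pca(X, eps, t):
    """Dense sketch of Algorithm 4 (RobustPCA). X: (n, d) corrupted samples."""
    n, d = X.shape
    p = max(3, int(round(np.sqrt(np.log(d) / (eps * np.log(1 / eps))))) | 1)  # odd p
    # Line 2: reweight samples by Schatten packing on A_i = x_i x_i^T
    # (the l_inf box constraint of Proposition 2 is omitted in this sketch).
    kind, w = schatten_packing([np.outer(x, x) for x in X], eps, p)
    assert kind == "primal", "the packing solver should return a primal point here"
    # Line 3: weighted empirical second moment matrix.
    M = (X * w[:, None]).T @ X
    # Line 4: top-t eigenvectors of M (exact, standing in for Power / [MM15]).
    evals, evecs = np.linalg.eigh(M)
    Z = evecs[:, ::-1][:, :t]                           # columns: top-t eigenvectors
    # Line 5: score each candidate by a robust variance estimate along M^((p-1)/2) z_j.
    M_half = np.linalg.matrix_power(M, (p - 1) // 2)
    scores = []
    for j in range(t):
        v = M_half @ Z[:, j]
        v /= np.linalg.norm(v)
        scores.append(one_d_robust_variance(X, v, eps))
    # Line 6: return the best-scoring direction.
    return Z[:, int(np.argmax(scores))]
```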
Broader Impact

Our work provides frameworks for learning properties about the covariance of sub-Gaussian distributions which have been corrupted under noise. As a key subroutine, we develop solvers for smoothed positive linear and semidefinite programs. We believe these results are interesting from an academic perspective, e.g. our techniques may be applicable generally for robust statistics and convex optimization researchers. Moreover, because our primary results concern robustness of models to arbitrarily corrupted data, we believe our methods may have practical implications for downstream tasks where protection against a malicious adversary is warranted. Similarly, as our main subroutine is a solver attaining strong computational guarantees for a wider variety of objectives than was previously known, it is possible that our methods can be leveraged to broaden the types of downstream tasks that can be performed. Namely, as ℓ_p norm packing linear program solvers have found applications in fair resource allocation, our hope is that our smoothed and mixed-norm guarantee semidefinite solvers can find similar applications in learning algorithms for objectives designed with fairness or privacy in mind.
1. What are the key contributions and novel aspects introduced by the paper in estimating eigenvectors of the covariance? 2. What are the strengths of the proposed algorithms, particularly their time complexity improvements? 3. Are there any weaknesses or limitations in the paper's approaches or claims?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes to estimate the top eigenvector of the covariance of a distribution with sub-Gaussian concentration. Two new algorithms are presented to handle this problem, one in polynomial time and one in nearly-linear time. In addition, a key technical tool, width-independent Schatten packing SDP solvers, is developed for the nearly-linear time method. Strengths The strengths of this work include proposing two new methods to estimate the top eigenvector of the covariance under sub-Gaussian concentration, a theoretical analysis of the time complexity of the new algorithms, and a novel width-independent iterative method. Weaknesses /
NIPS
1. What is the main contribution of the paper regarding Robust PCA? 2. What are the strengths of the proposed approach, particularly in terms of sample complexity and runtime efficiency? 3. Are there any concerns or limitations regarding the practical applicability of the algorithm? 4. How does the reviewer assess the novelty and technical depth of the paper's content? 5. What are the potential implications of the paper's results for future research in robust statistics and machine learning?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper studies the problem of Robust PCA of a subgaussian distribution. Specifically, one is given samples X_1,X_2,...,X_n from a subgaussian distribution, such that an eps-fraction of the samples have been arbitrarily corrupted (modified by an adversary); the goal is to approximately recover the top singular vector of the covariance matrix Sigma. Here, approximately recover means to find a vector u such that u^T Sigma u > (1 - gamma)|Sigma|_op, where |Sigma|_op is the operator norm of Sigma. The main results of this paper are runtime- and sample-complexity-efficient algorithms for this task. Specifically, they show 1) An algorithm that achieves error gamma = O(eps log(1/eps)) in polynomial time, specifically in tilde{O}(n d^2/eps) time, using n > Omega(d / (eps log(1/eps))^2) samples. 2) Under the assumption that the covariance matrix has a spectral gap between its first and second largest singular values, an algorithm that achieves slightly worse error gamma = O(\sqrt{eps log(1/eps) \log d }) in "nearly-linear" time using the same number of samples. Specifically in tilde{O}(n d/eps^{4.5}) time, using n > Omega(d / (eps log(1/eps))^2) samples. Prior to this work, the problems of robust covariance estimation and robust PCA were very well studied; however, most results (such as those using SoS-based techniques) required more structure on the moments of the distributions than a purely sub-Gaussian guarantee. This is the first such result on robust PCA in the eps-contamination model which gives polynomial time guarantees for general subgaussian distributions. Most prior works obtained guarantees for PCA by first robustly estimating the covariance matrix, which the authors point out is a potentially lossy step. Instead, the authors circumvent the step of covariance estimation, and instead directly estimate the top singular vector. This allows the authors to obtain improved sample complexity (linear in d, as opposed to the d^2 required by known algorithms for learning covariance matrices to spectral norm error). To obtain the result in 2), the authors need to solve the intermediate problem of width-independent Schatten packing. Here, the Schatten packing SDP is to solve min_w |sum_i w_i A_i|_p subject to w \in Delta^n, where Delta^n is the subset of probability distributions (w_1,...,w_n) over n points, the A_i's are fixed, and | |_p is the Schatten p-norm. This problem can be solved with additive error depending linearly on the "width" of the problem by an SDP solver. Here, the width rho is the largest spectral norm of a constraint. However, obtaining bounds independent of rho (or logarithmic in rho) was only known for the special case of p=infty (which is the spectral norm). The authors solve this intermediate problem by providing an algorithm which gives a (1+eps) multiplicative approximation of the objective value with O(p log(nd/eps)/eps) iterations. Thus, the number of iterations required by the solver to obtain relative error is independent of the width of the program. Moreover, for odd p, each iteration can be carried out in linear time. The algorithm proceeds by assigning weights w to samples, initialized to be uniform. One can then compute the top eigenvector u of the empirical covariance with respect to the samples X_1,...,X_n and the weights w_1,...,w_n. However, in the eps-contamination model, it is easy to see that in general u may not yield a good approximation for u^T Sigma u.
Firstly, using a standard procedure, one can robustly estimate the size of u^T Sigma u from the given samples in the fixed direction u. If this contribution is large enough, then u is a good target direction. Otherwise, they proceed by reweighting the distribution w so as to down-weight potentially corrupted samples. They then compute the new top eigenvector with respect to the new weights, and iteratively continue from there. For their nearly linear time algorithm, to implement the second step of down-weighting the samples efficiently, the authors utilize their width-independent Schatten packing algorithm. %%%%% Post-Rebuttal %%%% I appreciate the authors' replies, and continue to think that this is a very strong paper which I believe is well above the threshold to be accepted to NeurIPS. With regards to the response, I would clarify that I am not suggesting that an experimental evaluation needs to be added to this paper, just that a discussion of its implementation (that it can be parallelized, etc.) should be highlighted slightly more given the venue. Strengths This is a strong result, which should be particularly appealing to the NeurIPS community due to the simplicity of the assumption: only sub-Gaussianity is required (as opposed to more complicated algebraic assumptions on the moments of the distribution, which may not be as easy to justify in practice). The paper employs a mixture of known as well as novel techniques; the approach of re-weighting samples iteratively via an SDP is particularly interesting, and may be inspiring for future algorithms in robust statistics. Improved solvers for SDP packing under different norms (e.g. Schatten and Ky-Fan) have emerged as interesting and important for a variety of statistical problems. Thus, the Schatten packing solver, while used mainly as an intermediate step for the purposes of the near-linear time algorithm, is indeed interesting as an independent result, and likely useful for other applications. Weaknesses While theoretically interesting and technically novel, it's unclear whether the new SDP solver has much practical relevance. Thus, while a key selling point of this paper to the NeurIPS community may be its generality (only requiring sub-Gaussian moments), the practical applicability of the algorithm itself is somewhat lacking.
NIPS
Title Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing

Abstract We develop two methods for the following fundamental statistical task: given an ε-corrupted set of n samples from a d-dimensional sub-Gaussian distribution, return an approximate top eigenvector of the covariance matrix. Our first robust PCA algorithm runs in polynomial time, returns a (1 − O(ε log ε^{-1}))-approximate top eigenvector, and is based on a simple iterative filtering approach. Our second, which attains a slightly worse approximation factor, runs in nearly-linear time and sample complexity under a mild spectral gap assumption. These are the first polynomial-time algorithms yielding non-trivial information about the covariance of a corrupted sub-Gaussian distribution without requiring additional algebraic structure of moments. As a key technical tool, we develop the first width-independent solvers for Schatten-p norm packing semidefinite programs, giving a (1 + ε)-approximate solution in O(p log(nd/ε) ε^{-1}) input-sparsity time iterations (where n, d are problem dimensions).

1 Introduction

We study two natural, but seemingly unrelated, problems in high dimensional robust statistics and continuous optimization respectively. As we will see, these problems have an intimate connection.

Problem 1: Robust sub-Gaussian principal component analysis. We consider the following statistical task, which we call robust sub-Gaussian principal component analysis (PCA). Given samples X_1, …, X_n from a sub-Gaussian¹ distribution D with covariance Σ, an ε fraction of which are arbitrarily corrupted, the task asks to output a unit vector u with u^⊤Σu ≥ (1 − γ)‖Σ‖_∞² for tolerance γ. Ergo, the goal is to robustly return a (1 − γ)-approximate top eigenvector of the covariance of sub-Gaussian D. This is the natural extension of PCA to the robust statistics setting.

There has been a flurry of recent work on efficient algorithms for robust statistical tasks, e.g. covariance estimation and PCA. From an information-theoretic perspective, sub-Gaussian concentration suffices for robust covariance estimation. Nonetheless, to date all polynomial-time algorithms achieving nontrivial guarantees on covariance estimation (including PCA specifically) in the presence of adversarial noise require additional algebraic structure. For instance, sum-of-squares certifiably bounded moments have been leveraged in polynomial time covariance estimation algorithms [HL18, KSS18]; however, this is a stronger assumption than sub-Gaussianity.

In many applications (see discussion in [DKK+17]), the end goal of covariance estimation is PCA. Thus, a natural question which relaxes robust covariance estimation is: can we robustly estimate the top eigenvector of the covariance Σ, assuming only sub-Gaussian concentration? Our work answers this question affirmatively via two incomparable algorithms. The first achieves γ = O(ε log ε^{-1}) in polynomial time; the second achieves γ = O(√(ε log ε^{-1} log d)), in nearly-linear time under a mild gap assumption on Σ. Moreover, both methods have nearly-optimal sample complexity.

¹See Section 2 for a formal definition.
²Throughout we use ‖M‖_p to denote the Schatten p-norm (cf. Section 2 for more details).

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.

Problem 2: Width-independent Schatten packing. We consider a natural generalization of packing semidefinite programs (SDPs) which we call Schatten packing.
Given symmetric positive semidefinite A1, . . . , An and parameter p ≥ 1, a Schatten packing SDP asks to solve the optimization problem

min ‖ Σ_{i∈[n]} w_i A_i ‖_p subject to w ∈ Δ^n. (1)

Here, ‖M‖_p is the Schatten-p norm of matrix M and Δ^n is the probability simplex (see Section 2). When p = ∞, (1) is the well-studied (standard) packing SDP objective [JY11, ALO16, PTZ16], which asks to find the most spectrally bounded convex combination of packing matrices. For smaller p, the objective encourages combinations more (spectrally) uniformly distributed over directions. The specialization of (1) to diagonal matrices is a smooth generalization of packing linear programs, previously studied in the context of fair resource allocation [MSZ16, DFO18].

For the ℓ∞ case of (1), packing SDPs have the desirable property of admitting “width-independent” approximation algorithms via exploiting positivity structure. Specifically, width-independent solvers obtain multiplicative approximations with runtimes independent of, or logarithmically dependent on, size parameters of the problem. This is a strengthening of additive notions of approximation typically used for approximate semidefinite programming. Our work gives the first width-independent solver for Schatten packing.
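To make the objective in (1) concrete, here is a small numpy sketch that evaluates ‖Σ_i w_i A_i‖_p for a weighting w in the simplex and compares p = ∞ against a finite p. The matrices and the uniform weighting are arbitrary placeholders, not an instance from the paper.

```python
import numpy as np

def schatten_norm(M, p):
    """Schatten-p norm: the l_p norm of the spectrum of a symmetric matrix."""
    eigs = np.linalg.eigvalsh(M)
    return np.max(np.abs(eigs)) if np.isinf(p) else np.sum(np.abs(eigs) ** p) ** (1.0 / p)

def packing_objective(w, As, p):
    """Objective of (1): ||sum_i w_i A_i||_p for a weighting w in the simplex."""
    return schatten_norm(sum(wi * Ai for wi, Ai in zip(w, As)), p)

rng = np.random.default_rng(0)
d, n = 10, 5
As = [B @ B.T for B in rng.normal(size=(n, d, d))]   # arbitrary PSD inputs
w = np.ones(n) / n                                    # uniform point of the simplex
print(packing_objective(w, As, np.inf), packing_objective(w, As, 3))
```

For p = ∞ only the largest eigenvalue of the combination matters, while a finite p charges the whole spectrum, which is the "spectral uniformity" trade-off described above.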
Motivated by this, our work initiates the direct study of robust PCA, which is often independently interesting in applications. We remark there is another problem termed “robust PCA” in the literature, e.g. [CLMW11], under a different generative model. We defer a detailed discussion to [DKK+17], which experimentally shows that algorithms from that line of work do not transfer well to our corruption model. Width-independent iterative methods. Semidefinite programming (SDP) and its linear programming specialization are fundamental computational tasks, with myriad applications in learning, operations research, and computer science. Though general-purpose polynomial time algorithms exist for 3We say g = Õ(f) if g = O(f logc f) for some constant c > 0. SDPs ([NN94]), in practical settings in high dimensions, approximations depending linearly on input size and polynomially on error are sometimes desirable. To this end, approximation algorithms based on entropic mirror descent have been intensely studied [WK06, AK16, GHM15, AL17, CDST19], obtaining additive approximations to the objective with runtimes depending polynomially on ρ/ , where ρ is the “width”, the largest spectral norm of a constraint. For structured SDPs, stronger guarantees can be obtained in terms of width. Specifically, several algorithms developed for packing SDPs ((1) with p =∞) yield (1+ )-multiplicative approximations to the objective, with logarithmic dependence on width [JY11, PTZ16, ALO16, JLL+20]. As ρ upper bounds objective value in this setting, in the worst case runtimes of width-dependent solvers yielding ρ-additive approximations have similar dependences as width-independent counterparts. Widthindependent solvers simultaneously yield stronger multiplicative bounds at all scales of objective value, making them desirable in suitable applications. In particular, `∞ packing SDPs have found great utility in robust statistics algorithm design [CG18, CDG19, CDGW19, DL19]. Beyond `∞ packing, width-independent guarantees in the SDP literature are few and far between; to our knowledge, other than the covering and mixed solvers of [JLL+20], ours is the first such guarantee for a broader family of objectives4. Our method complements analogous `p extensions in the width-dependent setting, e.g. [ALO15], as well as width-independent solvers for `p packing linear programs [MSZ16, DFO18]. We highlight the fair packing solvers of [MSZ16, DFO18], motivated by problems in equitable resource allocation, which further solved `p packing variants for p 6∈ [1,∞). We find analogous problems in semidefinite settings interesting, and defer to future work. Concurrent work. Concurrent work by Kong et al. [KSKO20] also develops a PCA algorithm tolerant to a bounded fraction of adversarial corruption. Their method is similar to our algorithm based on soft downweighting (Algorithm 6), is analyzed under a fourth moment bound assumption (as opposed to sub-Gaussianity as in this paper), and also generalizes to top-k eigenvector estimation. To our knowledge, our fast algorithm (Algorithm 4) is the first in the literature which robustly solves the 1-PCA problem in near-linear time (for gapped covariances), at the cost of weaker error bounds. 1.2 Our results Robust sub-Gaussian principal component analysis. We give two algorithms for robust subGaussian PCA5. Both are sample optimal, polynomial-time, and assume only sub-Gaussianity. The first is via a simple filtering approach, as summarized in the following (and developed in Section 3). Theorem 1. 
Under Assumption 1, let δ ∈ [0, 1], and n = Ω ( d+log δ−1 ( log −1)2 ) . Algorithm 6 runs in time O(nd 2 log n δ log n δ ), and outputs u with u >Σu > (1− C? log −1)‖Σ‖∞, for C? a fixed multiple of parameter c in Assumption 1, with probability at least 1− δ. Our second algorithm is more efficient under mild conditions, but yields a worse approximation 1− γ for γ = O( √ log −1 log d). Specifically, if there are few eigenvalues of Σ larger than 1− γ, our algorithm runs in nearly-linear time. Note that if there are many eigenvalues above this threshold, then the PCA problem itself is not very well-posed; our algorithm is very efficient in the interesting setting where the approximate top eigenvector is identifiable. We state our main algorithmic guarantee here, and defer details to Section 5. Theorem 2. Under Assumption 1, let δ ∈ [0, 1], n = Ω ( d+log δ−1 ( log −1)2 ) , γ = C √ log −1 log d, for C a fixed multiple of parameter c from Assumption 1, and let t ∈ [d] satisfy Σt+1 < (1− γ) ‖Σ‖∞. Algorithm 4 outputs a unit vector u ∈ Rd with u>Σu ≥ (1− γ)‖Σ‖∞ in time Õ( nd 4.5 + ndt 1.5 ). Since Ω(d −2) samples are necessary for a (1− )-approximation to the top eigenvector of Σ via uncorrupted samples, our first method is sample-optimal, as is our second up to a Õ( −1) factor. Width-independent Schatten packing. Our second method crucially requires an efficient solver for Schatten packing SDPs. We demonstrate that Schatten packing, i.e. (1) for arbitrary p, admits width-independent solvers. We state an informal guarantee, and defer details to Section 4. 4In concurrent and independent work, [CMY20] develops width-independent solvers for Ky-Fan packing objectives, a different notion of generalization than the Schatten packing objectives we consider. 5We follow the distribution and corruption model described in Assumption 1. Theorem 3. Let {Ai}i∈[n] ∈ Sd≥0, and > 0. There is an algorithm taking O( p log(nd ) ) iterations, returning a 1 + multiplicative approximation to the problem (1). For odd p, each iteration can be implemented in time nearly-linear in the number of nonzeros amongst all {Ai}i∈[n]. 2 Preliminaries General notation. [n] denotes the set 1 ≤ i ≤ n. The operation ◦ applied to two vectors of equal dimension is their entrywise product. Applied to a vector, ‖·‖p is the `p norm; applied to a symmetric matrix, ‖·‖p is the Schatten-p norm, i.e. the `p norm of the spectrum. The dual norm of `p is `q for q = pp−1 ; when p = ∞, q = 1. ∆ n is the n-dimensional simplex (subset of positive orthant with `1-norm 1) and we define Snε ⊆ ∆n to be the truncated simplex: Snε := { w ∈ Rn≥0 ∣∣∣∣∣ ‖w‖1 = 1, w ≤ 1n(1− ε) entrywise } . (2) Matrices. Sd is d × d symmetric matrices, and Sd≥0 is the positive semidefinite subset. I is the identity of appropriate dimension. λmax, λmin, and Tr are the largest and smallest eigenvalues and trace of a symmetric matrix. For M,N ∈ Sd, 〈M,N〉 := Tr (MN) and we use the Loewner order , (M N iff N−M ∈ Sd≥0). The seminorm of M 0 is ‖v‖M := √ v>Mv. Fact 1. For A, B with compatible dimension, Tr(AB) = Tr(BA). For M,N ∈ Sd≥0, 〈M,N〉 ≥ 0. Fact 2. We have the following characterization of the Schatten-p norm: for M ∈ Sd, and q = pp−1 , ‖M‖p = sup N∈Sd, ‖N‖q=1 〈N,M〉 . For M = ∑ j∈[d] λiviv > i , the satisfying N is ∑ j∈[d]±λ p−1 i viv > i ‖M‖p−1p , so NM has spectrum |λ| p ‖M‖p−1p . Distributions. We denote drawing vector X from distribution D by X ∼ D, and the covariance Σ of D is EX∼D [ XX> ] . 
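As an aside, the variational characterization of the Schatten-p norm in Fact 2 is easy to verify numerically. In the sketch below the symmetric test matrix is an arbitrary placeholder; the constructed N attains ⟨N, M⟩ = ‖M‖_p and has ‖N‖_q = 1.

```python
import numpy as np

def schatten_dual_maximizer(M, p):
    """N with ||N||_q = 1 attaining <N, M> = ||M||_p (Fact 2), for symmetric M."""
    lam, V = np.linalg.eigh(M)
    norm_p = np.sum(np.abs(lam) ** p) ** (1.0 / p)
    # N = sum_j sign(lam_j) |lam_j|^{p-1} v_j v_j^T / ||M||_p^{p-1}
    coeffs = np.sign(lam) * np.abs(lam) ** (p - 1) / norm_p ** (p - 1)
    return (V * coeffs) @ V.T, norm_p

rng = np.random.default_rng(1)
B = rng.normal(size=(6, 6)); M = (B + B.T) / 2      # arbitrary symmetric test matrix
p = 3.0; q = p / (p - 1.0)
N, norm_p = schatten_dual_maximizer(M, p)
print(np.isclose(np.trace(N @ M), norm_p))                                    # <N, M> = ||M||_p
print(np.isclose(np.sum(np.abs(np.linalg.eigvalsh(N)) ** q) ** (1 / q), 1.0)) # ||N||_q = 1
```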
We say scalar distribution D is γ2-sub-Gaussian if EX∼D[X] = 0 and EX∼D [exp (tX)] ≤ exp ( t2γ2 2 ) ∀t ∈ R. Multivariate D has sub-Gaussian proxy Γ if its restriction to any unit v is ‖v‖2Γ-sub-Gaussian, i.e. EX∼D [ exp ( tX>v )] ≤ exp ( t2 ‖v‖2Γ 2 ) for all ‖v‖2 = 1, t ∈ R. (3) We consider the following standard model for gross corruption with respect to a distribution D. Assumption 1 (Corruption model, see [DKK+19]). Let D be a mean-zero distribution on Rd with covariance Σ and sub-Gaussian proxy Γ cΣ for a constant c. Denote by index set G′ with |G′| = n a set of (uncorrupted) samples {Xi}i∈G′ ∼ D. An adversary arbitrarily replaces n points in G′; we denote the new index set by [n] = B ∪G, where B is the (unknown) set of points added by an adversary, and G ⊆ G′ is the set of points from G′ that were not changed. As we only estimate covariance properties, the assumption that D is mean-zero only loses constants in problem parameters, by pairing samples and subtracting them (cf. [DKK+19], Section 4.5.1). 3 Robust sub-Gaussian PCA via filtering In this section, we sketch the proof of Theorem 1, which gives guarantees on our filtering algorithm for robust sub-Gaussian PCA. This algorithm obtains stronger statistical guarantees than Theorem 2, at the cost of super-linear runtime; the algorithm is given as Algorithm 6. Our analysis stems largely from concentration facts about sub-Gaussian distributions, as well as the following (folklore) fact regarding estimation of variance along any particular direction. Lemma 1. Under Assumption 1, let δ ∈ [0, 1], n = Ω ( log δ−1 ( log −1)2 ) , and u ∈ Rd be a fixed unit vector. Algorithm 5, 1DRobustVariance, takes input {Xi}i∈[n], u, and , and outputs σ2u with |u>Σu−σ2u| < Cu>Σu · log −1 with probability at least 1− δ, and runs in time O(nd+n log n), for C a fixed multiple of the parameter c in Assumption 1. In other words, we show that using corrupted samples, we can efficiently estimate a 1 +O( log −1)- multiplicative approximation of the variance of D in any unit direction6. This proof is deferred to Appendix B for completeness. Algorithm 6 combines this key insight with a soft filtering approach which has found many applications in the recent robust statistics literature, suggested by the following known structural fact found in previous work (e.g. Lemma A.1 of [DHL19], see also [SCV17, Ste18]). Lemma 2. Let {ai}i∈[m], {wi}i∈[m] be sets of nonnegative reals, and amax = maxi∈[m] ai. Define w′i = ( 1− aiamax ) wi, for all i ∈ [m]. Consider any disjoint partition IB , IG of [m] with∑ i∈IB wiai > ∑ i∈IG wiai. Then, ∑ i∈IB wi − w ′ i > 1 2amax ∑ i∈[m] wiai > ∑ i∈IG wi − w ′ i. Our Algorithm 6, PCAFilter, takes as input a set of corrupted samples {Xi}i∈[n] following Assumption 1 and the corruption parameter . At a high level, it initializes a uniform weight vector w(0), and iteratively operates as follows (we denote by M(w) the empirical covariance ∑ i∈[n] wiXiX > i ). 1. ut ← approximate top eigenvector of M(w(t−1)) via power iteration. 2. Compute σ2t ← 1DRobustVariance({Xi}i∈[n], ut, ). 3. If σ2t > (1−O( log −1)) · u>t M(w(t−1))ut, then terminate and return ut. 4. Else: (a) Sort indices i ∈ [n] by ai ← 〈ut, Xi〉2, with a1 smallest. (b) Let ` ≤ i ≤ n be the smallest set for which ∑n i=` wi ≥ 2 , and apply the downweight- ing procedure of Lemma 2 to this subset of indices. The analysis of Algorithm 6 then proceeds in two stages. Monotonicity of downweighting. 
We show the invariant criteria for Lemma 2 (namely, that for the set ` ≤ i ≤ n in every iteration, there is more spectral mass on bad points than good) holds inductively for our algorithm. Specifically, lack of termination implies M(w(t−1)) puts significant mass on bad directions, which combined with concentration of good directions yields the invariant. The details of this argument can be found as Lemma 11. Roughly uniform weightings imply approximation quality. As Lemma 2 then applies, the procedure always removes more mass from bad points than good, and thus can only remove at most 2 mass total by the corruption model. Thus, the weights w(t) are always roughly uniform (in SnO( )), which by standard concentration facts (see Appendix A) imply the quality of the approximate top eigenvector is good. Moreover, the iteration count is bounded by roughly d because whenever the algorithm does not terminate, enough mass is removed from large spectral directions. Combining with the termination criteria imply that when a vector is returned, it is a close approximation to the top direction of Σ. Details can be found as Lemma 13 and in the proof of Theorem 1. 4 Schatten packing For our second robust PCA algorithm, developed in Section 5, we require a key technical tool which we now develop in this section. The tool, Schatten-norm packing semidefinite programs (and hybrid-norm extensions), is a smoothed generalization of the classical packing semidefinite program, which may be of independent interest in other applications. At a high level, the reason Schatten packing solvers are useful for the robust PCA problem is because while an adversary can fool a PCA algorithm based on operator-norm semidefinite programs by “promoting” a single other eigenvector to have a larger variance, a p-norm-based semidefinite program forces a tradeoff between the number of directions promoted and the amount of variance introduced. 6Corollary 4 gives a slightly stronger guarantee that reusing samples does not break dependencies of u. 4.1 Mirror descent interpretation of [MRWZ16] We begin by reinterpreting the [MRWZ16] solver, which achieves the state-of-the-art parallel runtime for packing LPs7. An (`∞) packing LP algorithm solves the following decision problem.8. Problem 1 (`∞ packing linear program). Given entrywise nonnegative A ∈ Rd×n≥0 , either find primal solution x ∈ ∆n with ‖Ax‖∞ ≤ 1 + or dual solution y ∈ ∆d with A>y ≥ (1− )1. Algorithm 1 PackingLP(A, ) 1: Input: A ∈ Rd×n≥0 , ∈ [0, 1 2 ] 2: K ← 3 log(d) , η ← K −1, T ← 4 log(d) log(nd/ ) 2 3: [w0]i ← n2d for all i ∈ [n], z ← 0, t← 0 4: while Awt ≤ K1, ‖wt‖1 ≤ K do 5: vt ← exp(Awt)‖exp(Awt)‖1 6: gt ← max(0,1−A>vt) entrywise 7: wt+1 ← wt ◦ (1 + ηgt), z ← z + vt, t← t+ 1 8: if t ≥ T then 9: return y ← 1T z 10: end if 11: end while 12: return x← wt‖wt‖1 The following result is shown in [MRWZ16]. Proposition 1. PackingLP (Algorithm 1) solves Problem 1 in O(nnz(A) · log(d) log(nd/ ) 2 ) time. Our interpretation of the analysis of [MRWZ16] combines two ingredients: a potential argument and mirror descent (alternatively known as the “multiplicative weights” framework), which yields a dual feasible point if ‖wt‖1 did not grow sufficiently. Potential argument. The potential used by [MRWZ16] is log( ∑ j∈[d] exp([Awt]j))− ‖wt‖1, wellknown to be a O(log d)-additive approximation of ‖Awt‖∞−‖wt‖1. As soon as ‖Awt‖∞ or ‖wt‖1 reaches the scale O( log d ), by nonnegativity this becomes a multiplicative guarantee, motivating the setting of threshold K. 
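Algorithm 1, as stated above, translates almost line for line into code. The sketch below is dense (the paper's version exploits sparsity to obtain input-sparsity time), and the parameter settings restore the ε factors that the pseudocode above lost in extraction, so treat the exact constants as a best-effort reading rather than the paper's specification.

```python
import numpy as np

def packing_lp(A, eps):
    """l_inf packing LP solver in the spirit of Algorithm 1 (PackingLP), dense sketch.

    Returns ("primal", x) with x in the simplex and ||Ax||_inf <= 1 + eps,
    or ("dual", y) with y in the simplex and A^T y >= (1 - eps) entrywise.
    """
    d, n = A.shape
    K = 3.0 * np.log(d) / eps                     # growth threshold (epsilon restored)
    eta = eps / K
    T = int(np.ceil(4 * np.log(d) * np.log(n * d / eps) / eps**2))
    w = np.full(n, eps / (n**2 * d))              # small positive initialization
    z = np.zeros(d)
    for _ in range(T):
        Aw = A @ w
        if Aw.max() > K or w.sum() > K:           # enough growth certifies the primal
            return "primal", w / w.sum()
        v = np.exp(Aw - Aw.max()); v /= v.sum()   # softmax of Aw (gradient of the potential)
        g = np.maximum(0.0, 1.0 - A.T @ v)        # truncated gradient
        w = w * (1.0 + eta * g)                   # multiplicative-weights step
        z += v
    return "dual", z / T                          # average of the v_t certifies the dual

# Tiny usage example on a random nonnegative instance.
rng = np.random.default_rng(0)
print(packing_lp(rng.random((20, 30)), eps=0.1)[0])
```

The two return branches mirror the two exits of the pseudocode: sufficient growth of ‖w‖_1 or ‖Aw‖_∞ yields a primal solution after normalization, and otherwise the averaged softmax iterates form a dual certificate.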
To prove the potential is monotone, [MRWZ16] uses step size K−1 and a Taylor approximation; combining with the termination condition yields the desired claim. Mirror descent. To certify that wt grows sufficiently (e.g. the method terminates in few iterations, else dual feasibility holds), we interpret the step wt+1 ← wt ◦ (1 + ηgt) as approximate entropic mirror descent. Specifically, we track the quantity ∑ 0≤t<T 〈ηgt, u〉, and show that if ‖wt‖1 has not grown sufficiently, then it must be bounded for every u ∈ ∆n, certifying dual feasibility. Formally, for any gt sequence and u ∈ ∆n, we show O(log(nd/ )) + log ( ‖wT ‖1 ‖w0‖1 ) ≥ ∑ 0≤t<T 〈ηgt, u〉 ≥ η ∑ 0≤t<T 〈 1−A>vt, u 〉 . The last inequality followed by gt being an upwards truncation. If ‖wT ‖1 is bounded (else, we have primal feasibility), we show the entire above expression is bounded O(log nd ) for any u. Thus, by setting T = O( log(nd/ )η ) and choosing u to be each coordinate indicator, it follows that the average of all vt is coordinatewise at least 1− , and solves Problem 1 as a dual solution. Our gt is the (truncated) gradient of the function used in the potential analysis, so its form allows us to interpret dual feasibility (e.g. vt has `1 norm 1 and is a valid dual point). Our analysis patterns standard mirror descent, complemented by side information which says that lack of a primal solution can transform a regret guarantee into a feasibility bound. We apply this framework to analyze `p 7The [MRWZ16] solver also generalizes to covering and mixed objectives; we focus on packing in this work. 8Packing linear programs are sometimes expressed as the optimization problem maxx≥0,Ax≤1 ‖x‖1, simi- larly to (1); these problems are equivalent up to a standard binary search, see e.g. discussion in [JLL+20]. variants of Problem 1, via different potentials; our proofs are quite straightforward upon adopting this perspective, and we believe it may yield new insights for instances with positivity structure. 4.2 `p-norm packing linear programs In this section, we give an example of the framework proposed in Section 4.1, for approximately solving `p norm packing linear programs. Specifically, we now consider the generalization of Problem 1 to `p norms; throughout, q = pp−1 is the dual norm. Problem 2 (`p packing linear program). Given entrywise nonnegative A ∈ Rd×n≥0 , either find primal solution x ∈ ∆n with ‖Ax‖p ≤ 1 + or dual solution y ∈ Rd≥0, ‖y‖q = 1 with A>y ≥ (1− )1. For p = log d , Problem 2 recovers Problem 1 up to constants as `p multiplicatively approximates `∞ by 1 + . We now state our method for solving Problem 2 as Algorithm 2. Algorithm 2 PNormPacking(A, , p) 1: Input: A ∈ Rd×n≥0 , ∈ [0, 1 2 ], p ≥ 2 2: η ← p−1, T ← 4p log( nd ) 3: [w0]i ← n2d for all i ∈ [n], z ← 0, t← 0 4: while ‖wt‖1 ≤ −1 do 5: gt ← max(0,1−A>(vt)p−1) entrywise, for vt ← Awt‖Awt‖p 6: wt+1 ← wt ◦ (1 + ηgt), z ← z + (vt)p−1, t← t+ 1 7: if t ≥ T then 8: return y = z‖z‖q 9: end if 10: end while 11: return x = wt‖wt‖1 Other than changing parameters, the only difference from Algorithm 1 is that v is a point with unit `q norm induced by the gradient of our potential Φt. We state our main potential fact, whose proof is based straightforwardly on Taylor expanding ‖·‖p, and deferred to Appendix C for brevity. Lemma 3. In all iterations t of Algorithm 2, defining Φt := ‖Awt‖p − ‖wt‖1, Φt+1 ≤ Φt. We now state our main result, which leverages the potential bound following the framework of Section 4.1. A proof can be found in Appendix C. Theorem 4. 
Algorithm 2 runs in time O(nnz(A) · p log(nd/ ) ). Further, its output solves Problem 2. 4.3 Schatten-norm packing semidefinite programs We generalize Algorithm 2 to solve Schatten packing semidefinite programs, which we now define. Problem 3. Given {Ai}i∈[n] ∈ Sd≥0, either find primal solution x ∈ ∆n with ∥∥∥∑i∈[n] xiAi∥∥∥ p ≤ 1 + or dual solution Y ∈ Sd≥0, ‖Y‖q = 1 with 〈Ai,Y〉 ≥ 1− for all i ∈ [n]. We assume that p is an odd integer for simplicity (sufficient for our applications), and leave for interesting future work the cases when p is even or noninteger. The potential used in the analysis and an overall guarantee are stated here, and deferred to Appendix C. The proofs are simple modifications of Lemma 3 and Theorem 4 using trace inequalities (similar to those in [JLL+20]) in place of scalar inequalities, as well as efficient approximation of quantities in Line 5 via the standard technique of Johnson-Lindestrauss projections. Lemma 4. In all iterations t of Algorithm 3, defining Φt := ∥∥∥∑i∈[n][wt]iAi∥∥∥ p −‖wt‖1, Φt+1 ≤ Φt. Algorithm 3 SchattenPacking({Ai}i∈[n], , p) 1: Input: {Ai}i∈[n] ∈ Sd≥0, ∈ [0, 12 ], p ≥ 2 2: η ← p−1, T ← 4p log( nd ) 3: [w0]i ← n2d for all i ∈ [n], z ← 0 4: while ‖wt‖1 ≤ −1 do 5: gt ← max ( 0,1− 〈 Ai,V p−1 t 〉) entrywise, for Vt ← ∑ i∈[n][wt]iAi ‖∑i∈[n][wt]iAi‖p 6: wt+1 ← wt ◦ (1 + ηgt), Z← Z + (Vt)p−1, t← t+ 1 7: if t ≥ T then 8: return Y = Z‖Z‖q 9: end if 10: end while 11: return x = wt‖wt‖1 Theorem 5. Let p be odd. Algorithm 3 runs in O(p log(nd/ ) ) iterations, and its output solves Problem 3. Each iteration is implementable in O(nnz · p log(nd/ ) 2 ), where nnz is the number of nonzero entries amongst all {Ai}i∈[n], losing O( ) in the quality of Problem 3 with probability 1− poly((nd/ )−1). 4.4 Schatten packing with a `∞ constraint We remark that the framework outlined in Section 4.1 is flexible enough to handle mixed-norm packing problems. Specifically, developments in Section 5 require the following guarantee. Proposition 2. Following Theorem 5’s notation, let p be odd, {Ai}i∈[n] ∈ Sd≥0, 0 < = O(α), and min x∈∆n ‖x‖∞≤ 1+α n ‖A(x)‖p = OPT. (4) for A(x) := ∑ i∈[n] xiAi. Given estimate of OPT exponentially bounded in nd , there is a procedure calling Algorithm 7 O(log nd ) times giving x ∈ ∆ n with ‖x‖∞ ≤ (1+α)(1+ ) n , ‖A(x)‖p ≤ (1 + )OPT. Algorithm 7 runs in O( log(nd/ ) logn 2 ) iterations, each requiring time O(nnz · p log(nd/ ) 2 ). Our method, found in Appendix C, approximately solves (4) by first applying a standard binary search to placeA(x) on the right scale, for which it suffices to solve an approximate decision problem. Then, we apply a truncated mirror descent procedure on the potential Φ(w) = log(exp(‖A(w)‖p) + exp( n1+α ‖w‖∞)) − ‖w‖1, and prove correctness for solving the decision problem following the framework we outlined in Section 4.1. 5 Robust sub-Gaussian PCA in nearly-linear time We give our nearly-linear time robust PCA method, leveraging developments of Section 4. Throughout, we will be operating under Assumption 1, for some corruption parameter with log −1 log d = O(1); = O( 1log d log log d ) suffices. We now develop tools to prove Theorem 2. Algorithm 4 uses three subroutines: our earlier 1DRobustVariance method (Lemma 1), an application of our earlier Proposition 2 to approximate the solution to min w∈Sn ∥∥∥∥∥∥ ∑ i∈[n] wiXiX > i ∥∥∥∥∥∥ p , for p = Θ (√ log d log −1 ) , (5) and a method for computing approximate eigenvectors by [MM15] (discussed in Appendix D). Proposition 3. 
There is an algorithm Power (Algorithm 1, [MM15]), parameterized by t ∈ [d], tolerance ̃ > 0, p ≥ 1, and A ∈ Sd≥0, which outputs orthonormal {zj}j∈[t] with the guarantee∣∣z>j Apzj − λpj (A)∣∣ ≤ ̃λpj (A)∣∣∣z>j Ap−1zj − λp−1j (A)∣∣∣ ≤ ̃λp−1j (A) for all j ∈ [t]. (6) Here, λj(A) is the jth largest eigenvalue of A. The total time required by the method is O(nnz(A) tp log dε ). Algorithm 4 RobustPCA({Xi}i∈[n], , t) 1: Input: {Xi}i∈[n] = O( 1log d log log d ), t ∈ [d] with Σt+1 ≤ (1− γ)Σ for γ in Theorem 2 2: w ← BoxedSchattenPacking (Proposition 2) on {Ai = XiX>i }i∈[n], α← , p as in (5) 3: M = ∑ i∈[n] wiXiX > i 4: {zj}j∈[t] = Power(t, , p,M) 5: αj ← 1DRobustVariance({Xi}i∈[n],M p−1 2 zj/‖M p−1 2 zj‖2, ) for all j ∈ [t] 6: return zj∗ for j∗ = argmaxj∈[t]αj Algorithm 4 is computationally bottlenecked by the application of Proposition 2 on Line 2 and the t calls to 1DRobustVariance on Line 5, from which the runtime guarantee of Theorem 2 follows straightforwardly. To demonstrate correctness, we first certify the quality of the solution to (5). Lemma 5. Let n = Ω ( d+log δ−1 ( log −1)2 ) . With probability 1− δ2 , the uniform distribution over G attains value (1 + ̃2 )‖Σ‖p for objective (5), where ̃ = C ′ log −1 for a universal constant C ′ > 0. The proof of this is similar to results in e.g. [DKK+19, Li18], and combines concentration guarantees with a union bound over all possible corruption sets B. This implies the following immediately, upon applying the guarantees of Proposition 2. Corollary 1. Let w be the output of Line 2 of RobustPCA. Then, we have ‖w‖∞ ≤ 1 (1−2 )n , and∥∥∥∑i∈[n] wiXiX>i ∥∥∥ p ≤ (1 + ̃) ‖Σ‖p under the guarantee of Lemma 5. Let w be the output of the solver. Recall that M = ∑n i=1 wiXiX > i . Additionally, define MG := ∑ i∈G wiXiX > i , wG := ∑ i∈G wi, MB := ∑ i∈B wiXiX > i , wB := ∑ i∈G wi . (7) Notice in particular that M = MG + MB , and that all these matrices are PSD. We next prove the second, crucial fact, which says that MG is a good approximator to Σ in Loewner ordering: Lemma 6. Let n = Ω ( d+log δ−1 ( log −1)2 ) . With probability at least 1− δ2 , (1 + ̃)Σ MG (1− ̃)Σ. The proof combines the strategy in Lemma 5 with the SDP solver guarantee. Perhaps surprisingly, Corollary 1 and Lemma 6 are the only two properties about M that our final analysis of Theorem 2 will need. In particular, we have the following key geometric proposition, which carefully combines trace inequalities to argue that the corrupted points cannot create too many new large eigendirections. Proposition 4. Let M = MG + MB be so that ‖M‖p ≤ (1 + ̃) ‖Σ‖p, MG 0 and MB 0, and so that (1 + ̃)Σ MG (1− ̃)Σ. Following notation of Algorithm 4, let M = ∑ j∈[d] λjvjv > j , Σ = ∑ j∈[d] σjuju > j (8) be sorted eigendecompositions of M and Σ, so λ1 ≥ . . . ≥ λd, and σ1 ≥ . . . ≥ σd. Let γ be as in Theorem 2, and assume σt+1 < (1− γ)σ1. Then, max j∈[t] v>j Σvj ≥ (1− γ) ‖Σ‖∞ . With Proposition 4 in place, the recovery bound of Theorem 2 follows from an exact SVD. We show in Appendix D that the method is robust to approximations of the form (6), yielding our final claim. Broader Impact Our work provides frameworks for learning properties about the covariance of sub-Gaussian distributions which have been corrupted under noise. As a key subroutine, we develop solvers for smoothed positive linear and semidefinite programs. We believe these results are interesting from an academic perspective, e.g. our techniques may be applicable generally for robust statistics and convex optimization researchers. 
Moreover, because our primary results concern robustness of models to arbitrarily corrupted data, we believe our methods may have practical implications for downstream tasks where protection against a malicious adversary is warranted. Similarly, as our main subroutine is a solver attaining strong computational guarantees for a wider variety of objectives than was previously known, it is possible that our methods can be leveraged to broaden the types of downstream tasks that can be performed. Namely, as `p norm packing linear program solvers have found applications in fair resource allocation, our hope is that our smoothed and mixed-norm guarantee semidefinite solvers can find similar applications in learning algorithms for objectives designed with fairness or privacy in mind.
1. What is the focus and contribution of the paper regarding eigenvector approximation? 2. What are the strengths of the proposed algorithms, particularly their ability to handle sub-Gaussian and \epsilon-corrupted data? 3. What are the weaknesses of the paper, such as the abrupt transition between sections and lack of discussion on space complexity and practicality? 4. Do you have any questions or suggestions regarding the proposed algorithms' performance in real-world scenarios?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: The paper considers the problem of approximating the top eigenvector of a covariance matrix when the data are sub-Gaussian and ε-corrupted. It proposes two approximation algorithms, one running in polynomial time and the other in nearly-linear time, whose analyses do not require additional structure on the moments. The second algorithm is made possible by a novel width-independent SDP solver. Analysis of approximation factors and runtime complexity is provided for both proposed algorithms.
Strengths: The exposition of the paper is very clear, and the technical details are sound. The proposed robust PCA algorithms, in particular because they are provably correct without artificial structure on the moments, are a significant contribution to the ML community. The proposed width-independent SDP solver is also an important development for robust learning and other areas of ML.
Weaknesses: - The transition between Sections 3 and 4 is quite abrupt. It would be great if the authors could discuss, at a high level, how Schatten packing applies to the robust PCA problem at the beginning of Section 4. - I would like to see some discussion of the space complexity of the proposed SDP solver, and a comparison against other methods. - Some numerical experiments would be helpful to demonstrate the effectiveness and practicality of the proposed algorithms.
NIPS
Title Deep State Space Models for Unconditional Word Generation Abstract Autoregressive feedback is considered a necessity for successful unconditional text generation using stochastic sequence models. However, such feedback is known to introduce systematic biases into the training process and it obscures a principle of generation: committing to global information and forgetting local nuances. We show that a non-autoregressive deep state space model with a clear separation of global and local uncertainty can be built from only two ingredients: An independent noise source and a deterministic transition function. Recent advances on flowbased variational inference can be used to train an evidence lower-bound without resorting to annealing, auxiliary losses or similar measures. The result is a highly interpretable generative model on par with comparable auto-regressive models on the task of word generation. 1 Introduction Deep generative models for sequential data are an active field of research. Generation of text, in particular, remains a challenging and relevant area [HYX+17]. Recurrent neural networks (RNNs) are a common model class, and are typically trained via maximum likelihood [BVV+15] or adversarially [YZWY16, FGD18]. For conditional text generation, the sequence-to-sequence architecture of [SVL14] has proven to be an excellent starting point, leading to significant improvements across a range of tasks, including machine translation [BCB14, VSP+17], text summarization [RCW15], sentence compression [FAC+15] and dialogue systems [SSB+16]. Similarly, RNN language models have been used with success in speech recognition [MKB+10, GJ14]. In all these tasks, generation is conditioned on information that severely narrows down the set of likely sequences. The role of the model is then largely to distribute probability mass within relatively constrained sets of candidates. Our interest is, by contrast, in unconditional or free generation of text via RNNs. We take as point of departure the shortcomings of existing model architectures and training methodologies developed for conditional tasks. These arise from the increased challenges on both, accuracy and coverage. Generating grammatical and coherent text is considerably more difficult without reliance on an acoustic signal or a source sentence, which may constrain, if not determine much of the sentence structure. Moreover, failure to sufficiently capture the variety and variability of data may not surface in conditional tasks, yet is a key desideratum in unconditional text generation. The de facto standard model for text generation is based on the RNN architecture originally proposed by [Gra13] and incorporated as a decoder network in [SVL14]. It evolves a continuous state vector, emitting one symbol at a time, which is then fed back into the state evolution – a property that characterizes the broader class of autoregressive models. However, even in a conditional setting, these RNNs are difficult to train without substitution of previously generated words by ground truth observations during training, a technique generally referred to as teacher forcing [WZ89]. This approach is known to cause biases [RCAZ15a, GLZ+16] that can be detrimental to test time 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. performance, where such nudging is not available and where state trajectories can go astray, requiring ad hoc fixes like beam search [WR16] or scheduled sampling [BVJS15]. 
Nevertheless, teacher forcing has been carried over to unconditional generation [BVV+15]. Another drawback of autoregressive feedback [Gra13] is in the dual use of a single source of stochasticity. The probabilistic output selection has to account for the local variability in the next token distribution. In addition, it also has to inject a sufficient amount of entropy into the evolution of the state space sequence, which is otherwise deterministic. Such noise injection is known to compete with the explanatory power of autoregressive feedback mechanisms and may result in degenerate, near deterministic models [BVV+15]. As a consequence, there have been a variety of papers that propose deep stochastic state sequence models, which combine stochastic and deterministic dependencies, e.g. [CKD+15, FSPW16], or which make use of auxiliary latent variables [GSC+17], auxiliary losses [SATB17], and annealing schedules [BVV+15]. No canoncial architecture has emerged so far and it remains unclear how the stochasticity in these models can be interpreted and measured. In this paper, we propose a stochastic sequence model that preserves the Markov structure of standard state space models by cleanly separating the stochasticity in the state evolution, injected via a white noise process, from the randomness in the local token generation. We train our model using variational inference (VI) and build upon recent advances in normalizing flows [RM15, KSW16] to define rich enough stochastic state transition functions for both, generation and inference. Our main goal is to investigate the fundamental question of how far one can push such an approach in text generation, and to more deeply understand the role of stochasticity. For that reason, we have used the most basic problem of text generation as our testbed: word morphology, i.e. the mechanisms underlying the formation of words from characters. This enables us to empirically compare our model to autoregressive RNNs on several metrics that are intractable in more complex tasks such as word sequence modeling. 2 Model We argue that text generation is subject to two sorts of uncertainty: Uncertainty about plausible long-term continuations and uncertainty about the emission of the current token. The first reflects the entropy of all things considered “natural language", the second reflects symbolic entropy at a fixed position that arises from ambiguity, (near-)analogies, or a lack of contextual constraints. As a consequence, we cast the emission of a token as a fundamental trade-off between committing and forgetting about information. 2.1 State space model Let us define a state space model with transition function F : Rd × Rd → Rd, (ht, ξt) 7→ ht+1 = F (ht, ξt), ξt iid∼ N (0, I) . (1) F is deterministic, yet driven by a white noise process ξ, and, starting from some h0, defines a homogeneous stochastic process. A local observation model P (wt|ht) generates symbols wt ∈ Σ and is typically realized by a softmax layer with symbol embeddings. The marginal probability of a symbol sequence w = w1:T is obtained by integrating out h = h1:T , P (w) = ∫ T∏ t=1 p(ht|ht−1)P (wt|ht) dh . (2) Here p(ht|ht−1) is defined implicitly by driving F with noise as we will explain in more detail below.1In contrast to common RNN architectures, we have defined F to not include an auto-regressive input, such as wt−1, making potential biases as in teacher-forcing a non-issue. Furthermore, this implements our assumption about the role of entropy and information for generation. 
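As an illustration of this generative procedure, and of the absence of output feedback, here is a minimal numpy sketch of sampling from the model of Eqs. (1)-(2). The transition is a generic tanh map standing in for the flow-based family introduced later in Section 2.4, and all weights are untrained placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab, T = 8, 30, 10

# Untrained placeholder parameters: a generic transition in place of the flows of Sec. 2.4,
# and a linear softmax observation model P(w_t | h_t).
W_h = rng.normal(scale=0.5, size=(d, d))
W_xi = rng.normal(scale=0.5, size=(d, d))
W_out = rng.normal(size=(vocab, d))

def F(h, xi):
    """Deterministic transition driven only by white noise (Eq. 1); no symbol is fed back."""
    return np.tanh(W_h @ h + W_xi @ xi)

def sample_word(h0=None):
    h = np.zeros(d) if h0 is None else h0
    xis = rng.normal(size=(T, d))           # (i) sample the white noise sequence
    symbols = []
    for xi in xis:
        h = F(h, xi)                        # (ii) roll the states out deterministically
        logits = W_out @ h
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        symbols.append(int(rng.choice(vocab, p=probs)))   # (iii) emit from P(w_t | h_t)
    return symbols

print(sample_word())
```

Note that the sampled symbol never enters the next transition; all sequence-level entropy has to come from the noise sequence, which is exactly the point made next.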
The information about the local outcome under P (wt|ht) is not considered in the transition to the next state as there is no feedback. Thus in this model, all entropy about possible sequence continuations must arise from the noise process ξ, which cannot be ignored in a successfully trained model. 1For ease of exposition, we assume fixed length sequences, although in practice one works with end-ofsequence tokens and variable length sequences. The implied generative procedure follows directly from the chain rule. To sample a sequence of observations we (i) sample a white noise sequence ξ = ξ1...T (ii) deterministically compute h = h1...T from h0 and ξ via F and (iii) sample from the observation model ∏T t=1 P (wt|ht). The remainder of this section focuses on how we can define a sufficiently powerful familiy of state evolution functions F and how variational inference can be used for training. 2.2 Variational inference Model-based variational inference (VI) allows us to approximate the marginalization in Eq. (2) by posterior expectations with regard to an inference model q(h|w). It is easy to verify that the true posterior obeys the conditional independences ht ⊥⊥ rest |ht−1,wt:T , which informs our design of the inference model, cf. [FSPW16]: q(h|w) = T∏ t=1 q(ht|ht−1,wt:T ) . (3) This is to say, the previous state is a sufficient summary of the past. Jensen’s inequality then directly implies the evidence lower bound (ELBO) logP (w) ≥ Eq [ logP (w|h) + log p(h) q(h|w) ] =: L = T∑ t=1 Lt (4) Lt := Eq [logP (wt|ht)] + Eq [ log p(ht|ht−1) q(ht|ht−1,wt:T ) ] (5) This is a well-known form, which highlights the per-step balance between prediction quality and the discrepancy between the transition probabilities of the unconditioned generative and the dataconditioned inference models [FSnPW16, CKD+15]. Intuitively, the inference model breaks down the long range dependencies and provides a local training signal to the generative model for a single step transition and a single output generation. Using VI successfully for generating symbol sequences requires parametrizing powerful yet tractable next state transitions. As a minimum requirement, forward sampling and log-likelihood computation need to be available. Extensions of VAEs [RM15, KSW16] have shown that for non-sequential models under certain conditions an invertible function h = f(ξ) can shape moderately complex distributions over ξ into highly complex ones over h, while still providing the operations necessary for efficient VI. The authors show that a bound similar to Eq. (5) can be obtained by using the law of the unconscious statistician [RM15] and a density transformation to express the discrepancy between generative and inference model in terms of ξ instead of h L = Eq(ξ|w) [ logP (w|f(ξ)) + log p(f(ξ)) q(ξ|w) + log |detJf (ξ)| ] (6) This allows the inference model to work with an implicit latent distribution at the price of computing the Jacobian determinant of f . Luckily, there are many choices such that this can be done in O(d) [RM15, DSB16]. 2.3 Training through coupled transition functions We propose to use two separate transition functions Fq and Fg for the inference and the generative model, respectively. Using results from flow-based VAEs we derive an ELBO that reveals the intrinsic coupling of both and expresses the relation of the two as a part of the objective that is determined solely by the data. A shared transition model Fq = Fg constitutes a special case. Two-Flow ELBO For a transition function F as in Eq. 
(1) fix h = h∗ and define the restriction f(ξ) = F (h, ξ)|h=h∗ . We require that for any h∗, f is a diffeomorphism and thus has a differentiable inverse. In fact, as we work with (possibly) different Fg and Fq for generation and inference, we have restrictions fg and fq, respectively. For better readability we will omit the conditioning variable h∗ in the sequel. By combining the per-step decomposition in (5) with the flow-based ELBO from (6), we get (implicitly setting h∗ = ht−1): Lt = Eq(ξ|w) [ logP (wt|fq(ξt)) + log p(fq(ξt)|ht−1) q(ξt|ht−1;wt:T ) + log ∣∣detJfq (ξt)∣∣] . (7) As our generative model also uses a flow to transform ξt into a distribition on ht, it is more natural to use the (simple) density in ξ-space. Performing another change of variable, this time on the density of the generative model, we get p(ht|ht−1) = p(ζt|ht−1) · |detJf−1g (fq(ξt))| = r(ζt) |detJfg (ζt)| , ζt := (f −1 g ◦ fq)(ξt) (8) where r now is simply the (multivariate) standard normal density as ξt does not depend ht−1, whereas ht does. We have introduced new noise variable ζt = s(ξt) to highlight the importance of the transformation s = f−1g ◦ fq, which is a combined flow of the forward inference flow and the inverse generative flow. Essentially, it follows the suggested ξ-distribution of the inference model into the latent state space and back into the noise space of the generative model with its uninformative distribution. Putting this back into Eq. (7) and exploiting the fact that the Jacobians can be combined via detJs = detJfq/detJfg we finally get Lt = Eq(ξ|w) [ logP (wt|fq(ξt)) + log r(s(ξt)) q(ξt|ht−1;wt:T ) + log |detJs(ξt)| ] . (9) Interpretation Naïvely employing the model-based ELBO approach, one has to learn two independently parametrized transition models p(ht|ht−1) and q(ht|ht−1, wt...T ), one informed about the future and one not. Matching the two then becomes and integral part of the objective. However, since the transition model encapsulates most of the model complexity, this introduces redundancy where the learning problem is most challenging. Nevertheless, generative and inference model do address the transition problem from very different angles. Therefore, forcing both to use the exact same transition model might limit flexibility during training and result in an inferior generative model. Thus our model casts Fg and Fq as independently parametrized functions that are coupled through the objective by treating them as proper transformations of an underlying white noise process. 2 Special cases Additive Gaussian noise ht+1 = ht + ξt can be seen as the simplest form of Fg or, alternatively, as a generative model without flow (as Jfg = I). Of course, repeated addition of noise does not provide a meaningful latent trajectory. Finally, note that for Fg = Fq , s = id and the nominator in the second term becomes a simple prior probability r(ξt), whereas the determinant reduces to a constant. We now explore possible candidates for the flows in Fg and Fq . 2.4 Families of transition functions Since the Jacobian of a composed function factorizes, a flow F is often composed of a chain of individual invertible functions F = Fk ◦ · · · ◦ F1 [RM15]. We experiment with individual functions F (ht−1, ξt) = g(ht−1) + G(ht−1)ξt (10) where g is a multilayer MLP Rd → Rd and G is a neural network Rd → Rd × Rd mapping ht−1 to a lower-triangular d × d matrix with non-zero diagonal entries. Again, we use MLPs for this mapping and clip the diagonal away from [−δ, δ] for some hyper parameter 0 < δ < 0.5. 
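A sketch of one such transition block from Eq. (10): h' = g(h) + G(h)ξ with G(h) lower-triangular and its diagonal clipped away from [−δ, δ]. The single-hidden-layer MLP weights are random placeholders; the log-determinant and the inverse needed by the objective are included to show why the triangular structure keeps them cheap, as discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden, delta = 6, 16, 0.1

# Placeholder MLP weights for g: R^d -> R^d and for the lower-triangular entries of G(h).
Wg1 = rng.normal(scale=0.3, size=(hidden, d)); Wg2 = rng.normal(scale=0.3, size=(d, hidden))
n_tril = d * (d + 1) // 2
WG1 = rng.normal(scale=0.3, size=(hidden, d)); WG2 = rng.normal(scale=0.3, size=(n_tril, hidden))
rows, cols = np.tril_indices(d)

def g(h):
    return Wg2 @ np.tanh(Wg1 @ h)

def G(h):
    """Lower-triangular G(h) with its diagonal clipped away from [-delta, delta]."""
    L = np.zeros((d, d))
    L[rows, cols] = WG2 @ np.tanh(WG1 @ h)
    diag = np.diag(L).copy()
    sign = np.where(diag >= 0, 1.0, -1.0)
    L[np.arange(d), np.arange(d)] = sign * np.maximum(np.abs(diag), delta)
    return L

def forward(h, xi):
    """F(h, xi) = g(h) + G(h) xi; log|det J| of xi -> h' is the sum of log|diagonal|."""
    L = G(h)
    return g(h) + L @ xi, np.sum(np.log(np.abs(np.diag(L))))

def inverse(h, h_next):
    """Recover xi = G(h)^{-1} (h' - g(h)); a triangular solve would make this O(d^2)."""
    return np.linalg.solve(G(h), h_next - g(h))

h, xi = rng.normal(size=d), rng.normal(size=d)
h_next, logdet = forward(h, xi)
print(np.allclose(inverse(h, h_next), xi))  # True: the restriction xi -> h' is invertible
```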
The lower-triangular structure allows computing the determinant in O(d) and stable inversion of the mapping by substitution inO(d2). As a special case we also consider the case when G is restricted to diagonal matrices. Finally, we experiment with a conditional variant of the Real NVP flow [DSB16]. Computing F−1g is central to our objective and we found that depending on the flow actually parametrizing the inverse directly results in more stable and efficient training. 2Note that identifying s as an invertible function allows us to perform a backwards density transformation which cancels the regularizing terms. This is akin to any flow objective (e.g. see equation (15) in[RM15]) where applying the transformation additionally to the prior cancels out the Jacobian term. We can think of s as a stochastic bottleneck with the observation model P (wt|ht) attached to the middle layer. Removing the middle layer collapses the bottleneck and prohibits learning compression. 2.5 Inference network So far we have only motivated the factorization of the inference network q(h|w) =∏ q(ht|ht−1, wt:T ) but treated it as a black-box otherwise. Remember that sampling from the inference network amounts to sampling ξt ∼ q(·|ht−1, wt...T ) and then performing the deterministic transition Fq(ht−1, ξt). We observe much better training stability when conditioning q on the data wt...T only and modeling interaction with ht−1 exclusively through Fq. This coincides with our intuition that the two inputs to a transition function provide semantically orthogonal contributions. We follow existing work [DSB16] and choose q as the density of a normal distribution with diagonal covariance matrix. We follow the idea of [FSPW16] and incorporate the variable-length sequence wt:T by conditioning on the state of an RNN running backwards in time across w1...T . We embed the symbols w1...T in a vector space RdE and use use a GRU cell to produce a sequence of hidden states aT , . . . ,a1 where at has digested tokens wt:T . Together ht−1 and at parametrize the mean and co-variance matrix of q. 2.6 Optimization Except in very specific and simple cases, for instance, a Kalman filter, it will not be possible to efficiently compute the q-expectations in Eq. (5) exactly. Instead, we sample q in every time-step as is common practice for sequential ELBOs [FSnPW16, GSC+17]. The re-parametrization trick allows pushing all necessary gradients through these expectations to optimize the bound via stochastic gradient-based optimization techniques such as Adam [KB14]. 2.7 Extension: Importance-weighted ELBO for tracking the generative model Conceptionally, there are two ways we can imagine an inference network to propose ξ1:T sequences for a given sentence w1:T . Either, as described above, by digesting w1...T right-to-left and proposing ξ1:T left-to-right. Or, by iteratively proposing a ξt taking into account the last state ht−1 proposed and the generative deterministic mechanism Fg. The latter allows the inference network to peek at states ht that Fg could generate from ht−1 before proposing an actual target ht. This allows the inference model to track a multi-modal Fg without need for Fq to match its expressiveness. As a consequence, this might offer the possibility to learn multi-modal generative models, without the need to employ complex multi-modal distributions in the inference model. Our extension is built on importance weighted auto-encoders (IWAE) [BGS15]. 
The IWAE ELBO is derived by writing the log marginal as a Monte Carlo estimate before using Jensen’s inequality. The result is an ELBO and corresponding gradients of the form3 L = Eh(k) [ log 1 K K∑ k=1 p(w,h(k)) q(h(k)|w)︸ ︷︷ ︸ =:ω(k) ] , ∇L = Eh(k) [ K∑ k=1 ω(k)∑ k′ ω (k′) ∇ logω(k) ] , h(k)∼ q(·|w) (11) The authors motivate (11) as a weighting mechanism relieving the inference model from explaining the data well with every sample. We will use the symmetry of this argument to let the inference model condition on potential next states hgt = Fg(ht−1, ξt), ξt∼N (0, I) from the generative model without requiring every hgt to allow q to make a good proposal. In other words, the K sampled outputs of Fg become a vectorized representation of Fg to condition on. In our sequential model, computing ω(k) exactly is intractable as it would require rolling out the network until time T . Instead, we limit the horizon to only one time-step. Although this biases the estimate of the weights and consequently the ELBO, longer horizons did empirically not show benefits. When proceeding to time-step t+ 1 we choose the new hidden state by sampling h(k) with probability proportionally to ω(k). Algorithm 1 summarizes the steps carried out at time t for a given ht−1 (to not overload the notation, we drop t in hgt) and a more detailed derivation of the bound is given in Appendix A. 3Here we have tacitly assumed that h can be rewritten using the reprametrization trick so that the expectation can be expressed with respect to some parameter-free base-distribution. See [BGS15] for a detailed derivation of the gradients in (11). Algorithm 1 Detailed forward pass with importance weighting Simulate Fg: h (k) g = Fg(ht−1, ξ (k)), where ξ(k) ∼ N (0, I), k = 1, . . . ,K Instantiate the inference family: qk(h) = q(h|h(k)g ,ht−1, wt:T ) Sample inference: h(k) ∼ qk Compute gradients as in (11) where ω(k) = P (wt|h(k))p(h(k)|ht−1)/qk(h(k)) Sample h(k) according to ω(1) . . . ω(K) for the next step. 3 Related Work Our work intersects with work directly addressing teacher-forcing, mostly on language modelling and translation (which are mostly not state space models) and stochastic state space models (which are typically autoregressive and do not address teacher forcing). Early work on addressing teacher-forcing has focused on mitigating its biases by adapting the RNN training procedure to partly rely on the model’s prediction during training [BVJS15, RCAZ15b]. Recently, the problem has been addressed for conditional generation within an adversarial framework [GLZ+16] and in various learning to search frameworks [WR16, LAOL17]. However, by design these models do not perform stochastic state transitions. There have been proposals for hybrid architectures that augment the deterministic RNN state sequences by chains of random variables [CKD+15, FSPW16]. However, these approaches are largely patching-up the output feedback mechanism to allow for better modeling of local correlations, leaving the deterministic skeleton of the RNN state sequence untouched. A recent evolution of deep stochastic sequence models has developed models of ever increasing complexity including intertwined stochastic and deterministic state sequences [CKD+15, FSPW16] additional auxiliary latent variables [GSC+17] auxiliary losses [SATB17] and annealing schedules [BVV+15]. At the same time, it remains often unclear how the stochasticity in these models can be interpreted and measured. 
Closest in spirit to our transition functions is work by Maximilian et al.[KSBvdS17] on generation with external control inputs. In contrast to us they use a simple mixture of linear transition functions and work around using density transformations akin to [BO14]. In our unconditional regime we found that relating the stochasticity in ξ explicitly to the stochasticity in h is key to successful training. Finally, variational conditioning mechanisms similar in spirit to ours have seen great success in image generation[GDGW15]. Among generative unconditional sequential models GANs are as of today the most prominent architecture [YZWY16, JKMHL16, FGD18, CLZ+17]. To the best of our knowledge, our model is the first non-autoregressive model for sequence generation in a maximum likelihood framework. 4 Evaluation Naturally, the quality of a generative model must be measured in terms of the quality of its outputs. However, we also put special emphasis on investigating whether the stochasticity inherent in our model operates as advertised. 4.1 Data Inspection Evaluating generative models of text is a field of ongoing research and currently used methods range from simple data-space statistics to expensive human evaluation [FGD18]. We argue that for morphology, and in particular non-autoregressive models, there is an interesting middle ground: Compared to the space of all sentences, the space of all words has still moderate cardinality which allows us to estimate the data distribution by unigram word-frequencies. As a consequence, we can reliably approximate the cross-entropy which naturally generalizes data-space metrics to probabilistic models and addresses both, over-generalization (assigning non-zero probability to non-existing words) and over-confidence (distributing high probability mass only among a few words). This metric can be addressed by all models which operate by first stochastically generating a sequence of hidden states and then defining a distribution over the data-space given the state sequence. For our model we approximate the marginal by a Monte Carlo estimate of (2) P (w) = ∫ P (w|h)p(h)dh = 1 K K∑ k=1 P (w|h(k)), h(k) ∼ p(h) (12) Note that sampling from p(h) boils down to sampling ξ1...T from independent standard normals and then applying Fg. In particular, the non-autoregressive property of our model allows us to estimate all words in some set S using K samples each by using only K independent trajectories h overall. Finally, we include two data-space metrics as an intuitive, yet less accurate measure. From a collection of generated words, we estimate (i) the fraction of words that are in the training vocabulary (w ∈ V ) and (ii) the fraction of unique words that are in the training vocabulary (w ∈ V unique).4 4.2 Entropy Inspection We want to go beyond the usual evaluation of existing work on stochastic sequence models and also assess the quality of our noise model. In particular, we are interested in how much information contained in a state ht about the output P (wt|ht) is due to the corresponding noise vector ξt. This is quantified by the mutual information between the noise ξt and the observation wt given the noise ξ1:t−1 that defined the prefix up to time t. Since ht−1 is a deterministic function of ξ1:t−1, we write I(t) = I(wt; ξt|ht−1) = Eht−1 [ H[wt|ht−1]−H[wt|ξt,ht−1] ] ≥ 0 (13) to quantify the dependence between noise and observation at one time-step. For a model ignoring the noise variables, knowledge of ξt does not reduce the uncertainty about wt, so that I(t) = 0. 
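Concretely, the cross-entropy evaluation reduces to the Monte Carlo estimate in (12). The sketch below assumes a hypothetical model object exposing sample_states(K, T) (prior trajectories h ~ p(h)) and token_probs(h_t) (the softmax observation model); these helper names are ours, not the paper's, and for simplicity it resamples trajectories per word even though the non-autoregressive structure would let one set of K trajectories be shared across all words.

```python
import numpy as np

def log_marginal(word, model, K=100):
    """Monte Carlo estimate of log P(w) as in (12): average P(w|h) over K prior trajectories."""
    log_pk = []
    for h_traj in model.sample_states(K, T=len(word)):   # h ~ p(h): noise + transition only
        step_probs = [model.token_probs(h_t)[w_t] for h_t, w_t in zip(h_traj, word)]
        log_pk.append(np.sum(np.log(step_probs)))
    log_pk = np.array(log_pk)
    m = log_pk.max()
    return m + np.log(np.mean(np.exp(log_pk - m)))       # log-mean-exp for numerical stability

def cross_entropy(word_freqs, model, K=100):
    """H[P_data, P_hat] over the vocabulary, with P_data given by unigram word frequencies."""
    return -sum(freq * log_marginal(word, model, K) for word, freq in word_freqs.items())
```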
We can use Monte Carlo estimates for all expectations in (13). 5 Experiments 5.1 Dataset and baseline For our experiments, we use the BooksCorpus [KZS+15, ZKZ+15], a freely available collection of novels comprising of almost 1B tokens out of which 1.3M are unique. To filter out artefacts and some very uncommon words found in fiction, we restrict the vocabulary to words of length 2 ≤ l ≤ 12 with at least 10 occurrences that only contain letters resulting in a 143K vocabulary. Besides the standard 10% test-train split at the word level, we also perform a second, alternative split at the vocabulary level. That means, 10 percent of the words, chosen regardless of their frequency, will be unique to the test set. This is motivated by the fact that even a small test-set under the former regime will result in only very few, very unlikely words unique to the test-set. However, generalization to unseen words is the essence of morphology. As an additional metric to measuring generalization in this scenario, we evaluate the generated output under Witten-Bell discounted character n-gram models trained on either the whole corpus or the test data only. Our baseline is a GRU cell and the standard RNN training procedure with teacher-forcing5. Hidden state size and embedding size are identical to our model’s. 5.2 Model parametrization We stick to a standard softmax observation model and instead focus the model design on different combinations of flows for Fg and Fq . We investigate the flow in Equation (10), denoted as TRIL, its diagonal version DIAG and a simple identity ID. We denote repeated application of (independently parametrized) flows as in 2 × TRIL. For the weighted version we use K ∈ {2, 5, 10} samples. In addition, for Fg we experiment with a sequence of Real NVPs with masking dimensions d = 2 . . . 7 (two internal hidden layers of size 8 each). Furthermore, we investigate deviating from the factorization (3) by using a bidirectional RNN conditioning on all w1...T in every timestep. Finally, for the best performing configuration, we also investigate state-sizes d = {16, 32}. 4Note that for both data-space metrics there is a trivial generation system that achieves a ‘perfect’ score. Hence, both must be taken into account at the same time to judge performance. 5It should be noted that despite the greatly reduced vocabulary in character-level generation, RNN training without teacher-forcing for our data still fails miserably. 5.3 Results Table 1 shows the result for the standard split. By ± we indicate mean and standard deviation across 5 or 10 (for IWAE) identical runs6. The data-space metrics require manually trading off precision and coverage. We observe that two layers of the TRIL flow improve performance. Furthermore, importance weighting significantly improves the results across all metrics with diminishing returns at K = 10. Its effectiveness is also confirmed by an increase in variance across the weights ω1 . . . ωT during training which can be attributed to the significance of the noise model (see 5.4 for more details). We found training with REAL-NVP to be very unstable. We attribute the relatively poor performance of NVP to the sequential VI setting which deviates heavily from what it was designed for and keep adaptions for future work. 
Model                        H[Ptrain, P̂]   H[Ptest, P̂]   w ∈ V unique   w ∈ V      Ī
TRIL                         12.13±.11       11.99±.11      0.18±.00       0.43±.03   0.95±.04
TRIL, K=2                    11.76±.12       11.82±.12      0.16±.01       0.46±.02   1.06±.16
TRIL, K=5                    11.46±.05       11.51±.05      0.16±.01       0.48±.02   1.08±.13
TRIL, K=10                   11.43±.05       11.47±.05      0.16±.01       0.49±.02   1.12±.12
2×TRIL                       11.91±.08       11.86±.13      0.17±.01       0.45±.02   0.89±.07
2×TRIL, K=2                  11.55±.09       11.61±.09      0.16±.00       0.47±.01   1.00±.13
2×TRIL, K=5                  11.42±.07       11.46±.06      0.16±.00       0.49±.01   1.20±.12
2×TRIL, K=10                 11.33±.05       11.38±.06      0.16±.00       0.49±.01   1.28±.13
2×TRIL, K=10, BIDI           11.33±.09       11.39±.10      0.16±.01       0.48±.00   1.25±.16
2×TRIL, K=10 (d = 16)        11.21           11.43          0.15           0.48       1.43
2×TRIL, K=10 (d = 32)        11.27           11.13          0.15           0.50       1.31
REAL-NVP-[2,3,4,5,6,7]       11.77           11.81          0.12           0.53       0.94
BASELINE-8D                  12.92           12.97          0.13           0.53       –
BASELINE-16D                 12.55           12.60          0.14           0.62       –
ORACLE-TRAIN                 7.0             7.02⁷          0.27           1.0        –

Table 1: Results on generation. The cross entropy is computed wrt. both training and test set. ORACLE-TRAIN is a model sampling from the training data.

⁶Single best model with d = 8: 2×TRIL, K = 10 achieved H[Ptrain, P̂] = 11.26 and H[Ptest, P̂] = 11.28.
⁷Note that the training-set oracle is not optimal for the test set. The entropy of the test set is 6.80.

Interestingly, our standard inference model is on par with the equivalently parametrized bidirectional inference model, suggesting that historic information can be sufficiently stored in the states and confirming d-separation as the right principle for inference design. The poor cross-entropy achieved by the baseline can partly be explained by the fact that autoregressive RNNs are trained on conditional next-word predictions. Estimating the real data-space distribution would require aggregating over all possible sequences w ∈ V^T. However, the data-space metrics clearly show that the performance cannot solely be attributed to this.

Table 2 shows that generalization for the alternative split is indeed harder, but the cross-entropy results carry over from the standard setting. Here we sample trajectories and extract the argmax from the observation model, which resembles more closely the procedure of the baseline. Under n-gram perplexity both models are on par, with a slight advantage of the baseline on longer n-grams and slightly better generalization of our proposed model.

                                              n-gram from train+test      n-gram from test
Model          H[Ptrain, P̂]   H[Ptest, P̂]   P2    P3    P4    P5        P2    P3    P4    P5
2×TRIL, K=10   11.56           12.27          10.4  12.8  20.9  30.7      13.1  21.9  49.6  81.1
BASELINE-8D    12.90           13.67          11.4  12.1  17.5  24.8      14.5  22.7  48.3  80.5
ORACLE-TRAIN   –               –              10.1  6.7   4.8   4.1       13.2  15.7  21.4  26.4
ORACLE-TEST    –               –              9.5   6.0   4.5   3.9       7.9   4.1   2.9   2.6

Table 2: Results for the alternative data split: Cross entropy and perplexity under n = 2, 3, 4, 5-gram language models estimated on either the full corpus or the test set only.

To give more insight into how the transition functions influence the results, Table 1a presents an exhaustive overview for all combinations of our simple flows. We observe that a powerful generative flow is essential for successful models while the inference flow can remain relatively simple – yet simplistic choices, such as ID, degrade performance. Choosing Fg slightly more powerful than Fq emerges as a successful pattern.

5.4 Noise Model Analysis
We use K = 20 samples to approximate the entropy terms in (13). In addition, we denote by Ī the average mutual information across all time-steps. Figure 3 shows how Ī, along with the symbolic entropy H[wt|ht], changes during training.
Remember that in a non-autoregressive model, the latter corresponds to information that cannot be recovered in later timesteps. Over the course of the training, more and more information is driven by ξt and absorbed into states ht, where it can be stored. Figures 1 and 1b show Ī for all trained models. In addition, Figure 2 shows a box-plot of I(t) for each t = 1 . . . T for the configuration 2×TRIL, K=10. As initial tokens are more important to remember, it should not come as a surprise that I(t) is largest at first and decreases over time, yet with increased variance.

[Figure 2: Noise mutual information I(t) over sequence position t = 1 . . . T.]

[Figure 3: Entropy analysis over training time. For reference, the dashed line indicates the overall word entropy of the trained baseline.]

6 Conclusion
In this paper we have shown how a deep state space model can be defined and trained with the help of variational flows. The recurrent mechanism is driven purely by a simple white noise process and does not require autoregressive conditioning on previously generated symbols. In addition, we have shown how an importance-weighted conditioning mechanism integrated into the objective allows shifting stochastic complexity from the inference to the generative model. The result is a highly flexible framework for sequence generation with an extremely simple overall architecture, a measurable notion of latent information and no need for pre-training, annealing or auxiliary losses. We believe that pushing the boundaries of non-autoregressive modeling is key to understanding stochastic text generation and can open the door to related fields such as particle filtering [NLRB17, MLT+17].
1. What is the main contribution of the paper, and how does it address a common problem in sequence generation?
2. What are the strengths and weaknesses of the proposed model, particularly regarding its elegance, thoroughness, and comparison to autoregressive baselines?
3. How does the reviewer assess the clarity and quality of the paper's content, especially regarding the description of normalizing flows and typos?
4. How does the reviewer evaluate the originality and significance of the paper's method, particularly its coupled transition functions, compared to existing works in non-autoregressive sequence generation?
Review
Review
This paper proposes a non-autoregressive state space model for sequence generation. It builds on flow-based variational inference and uses an importance weighted evidence lower bound to shift the complexity from the generative to the inference model. The paper then performs evaluation on word generation from the BooksCorpus dataset.

Quality:
Strengths: The paper is well-motivated and tackles a common problem: non-autoregressive generation of sequences. The model is elegant and the results are thorough and strong compared to the autoregressive baseline. The graphs of the entropy gap and entropy analysis are useful.
Weaknesses: The limit on word length (at most 12 characters) makes it hard to know whether this approach generalizes to longer sequences. For example, at what sequence lengths does this start to break down? What is the cross-entropy of the baseline that was trained without autoregressive inputs? The network sizes are quite small. Only state sizes of 8 and 16 are considered, which are much smaller than standard models. Why weren't larger state sizes used? Table 2 seems like it should also include results from Real-NVP.

Clarity:
Weaknesses: The description of normalizing flows in the last part of section 2.2 could be much improved.
Line 27: 'which may constraint' -> which may constrain
Line 34: Typo: prevoiusly
Line 46: determinstic
Footnote 2: Jaccobian
Algorithm 1: Details -> Detailed
Line 212: emphasise -> emphasis

Originality: The method is relatively original and the coupled transition functions are quite interesting and are novel compared to existing work.

Significance: The work has significance for the growing area of non-autoregressive sequence generation.

=== POST-REBUTTAL ===
Thanks for responding to my questions. I have kept my accept score.
NIPS
Title
Deep State Space Models for Unconditional Word Generation

Abstract
Autoregressive feedback is considered a necessity for successful unconditional text generation using stochastic sequence models. However, such feedback is known to introduce systematic biases into the training process and it obscures a principle of generation: committing to global information and forgetting local nuances. We show that a non-autoregressive deep state space model with a clear separation of global and local uncertainty can be built from only two ingredients: An independent noise source and a deterministic transition function. Recent advances on flow-based variational inference can be used to train an evidence lower-bound without resorting to annealing, auxiliary losses or similar measures. The result is a highly interpretable generative model on par with comparable auto-regressive models on the task of word generation.

1 Introduction
Deep generative models for sequential data are an active field of research. Generation of text, in particular, remains a challenging and relevant area [HYX+17]. Recurrent neural networks (RNNs) are a common model class, and are typically trained via maximum likelihood [BVV+15] or adversarially [YZWY16, FGD18]. For conditional text generation, the sequence-to-sequence architecture of [SVL14] has proven to be an excellent starting point, leading to significant improvements across a range of tasks, including machine translation [BCB14, VSP+17], text summarization [RCW15], sentence compression [FAC+15] and dialogue systems [SSB+16]. Similarly, RNN language models have been used with success in speech recognition [MKB+10, GJ14]. In all these tasks, generation is conditioned on information that severely narrows down the set of likely sequences. The role of the model is then largely to distribute probability mass within relatively constrained sets of candidates.
Our interest is, by contrast, in unconditional or free generation of text via RNNs. We take as point of departure the shortcomings of existing model architectures and training methodologies developed for conditional tasks. These arise from the increased challenges on both accuracy and coverage. Generating grammatical and coherent text is considerably more difficult without reliance on an acoustic signal or a source sentence, which may constrain, if not determine, much of the sentence structure. Moreover, failure to sufficiently capture the variety and variability of the data may not surface in conditional tasks, yet such coverage is a key desideratum in unconditional text generation.
The de facto standard model for text generation is based on the RNN architecture originally proposed by [Gra13] and incorporated as a decoder network in [SVL14]. It evolves a continuous state vector, emitting one symbol at a time, which is then fed back into the state evolution – a property that characterizes the broader class of autoregressive models. However, even in a conditional setting, these RNNs are difficult to train without substitution of previously generated words by ground truth observations during training, a technique generally referred to as teacher forcing [WZ89]. This approach is known to cause biases [RCAZ15a, GLZ+16] that can be detrimental to test-time performance, where such nudging is not available and where state trajectories can go astray, requiring ad hoc fixes like beam search [WR16] or scheduled sampling [BVJS15].
Nevertheless, teacher forcing has been carried over to unconditional generation [BVV+15]. Another drawback of autoregressive feedback [Gra13] is in the dual use of a single source of stochasticity. The probabilistic output selection has to account for the local variability in the next-token distribution. In addition, it also has to inject a sufficient amount of entropy into the evolution of the state space sequence, which is otherwise deterministic. Such noise injection is known to compete with the explanatory power of autoregressive feedback mechanisms and may result in degenerate, near-deterministic models [BVV+15]. As a consequence, there have been a variety of papers that propose deep stochastic state sequence models, which combine stochastic and deterministic dependencies, e.g. [CKD+15, FSPW16], or which make use of auxiliary latent variables [GSC+17], auxiliary losses [SATB17], and annealing schedules [BVV+15]. No canonical architecture has emerged so far and it remains unclear how the stochasticity in these models can be interpreted and measured.
In this paper, we propose a stochastic sequence model that preserves the Markov structure of standard state space models by cleanly separating the stochasticity in the state evolution, injected via a white noise process, from the randomness in the local token generation. We train our model using variational inference (VI) and build upon recent advances in normalizing flows [RM15, KSW16] to define rich enough stochastic state transition functions for both generation and inference. Our main goal is to investigate the fundamental question of how far one can push such an approach in text generation, and to more deeply understand the role of stochasticity. For that reason, we have used the most basic problem of text generation as our testbed: word morphology, i.e. the mechanisms underlying the formation of words from characters. This enables us to empirically compare our model to autoregressive RNNs on several metrics that are intractable in more complex tasks such as word sequence modeling.

2 Model
We argue that text generation is subject to two sorts of uncertainty: Uncertainty about plausible long-term continuations and uncertainty about the emission of the current token. The first reflects the entropy of all things considered "natural language", the second reflects symbolic entropy at a fixed position that arises from ambiguity, (near-)analogies, or a lack of contextual constraints. As a consequence, we cast the emission of a token as a fundamental trade-off between committing to and forgetting about information.

2.1 State space model
Let us define a state space model with transition function

F : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d, \quad (h_t, \xi_t) \mapsto h_{t+1} = F(h_t, \xi_t), \qquad \xi_t \overset{iid}{\sim} \mathcal{N}(0, I). \qquad (1)

F is deterministic, yet driven by a white noise process ξ, and, starting from some h0, defines a homogeneous stochastic process. A local observation model P(wt|ht) generates symbols wt ∈ Σ and is typically realized by a softmax layer with symbol embeddings. The marginal probability of a symbol sequence w = w1:T is obtained by integrating out h = h1:T,

P(\mathbf{w}) = \int \prod_{t=1}^{T} p(h_t|h_{t-1})\, P(w_t|h_t)\, d\mathbf{h}. \qquad (2)

Here p(ht|ht−1) is defined implicitly by driving F with noise, as we will explain in more detail below.¹ In contrast to common RNN architectures, we have defined F to not include an auto-regressive input, such as wt−1, making potential biases as in teacher-forcing a non-issue. Furthermore, this implements our assumption about the role of entropy and information for generation.
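To make the absence of output feedback concrete, the following is a minimal sketch of the ancestral sampling procedure implied by (1) and the observation model. The interface (`F`, `emit`, `h0`) is our own illustrative naming, not code released with the paper.

```python
import torch

def sample_sequence(F, emit, h0, T):
    """Ancestral sampling from the state space model in (1)-(2).

    F(h, xi) -> next state   (the deterministic transition, driven by white noise)
    emit(h)  -> logits       (the softmax observation model P(w_t | h_t))
    """
    h, words = h0, []
    for _ in range(T):
        xi = torch.randn_like(h)          # xi_t ~ N(0, I): the only source of state entropy
        h = F(h, xi)                      # h_t = F(h_{t-1}, xi_t)
        w = torch.distributions.Categorical(logits=emit(h)).sample()
        words.append(w)                   # the sampled symbol is never fed back into the state
    return torch.stack(words)
```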
The information about the local outcome under P(wt|ht) is not considered in the transition to the next state as there is no feedback. Thus in this model, all entropy about possible sequence continuations must arise from the noise process ξ, which cannot be ignored in a successfully trained model.

¹For ease of exposition, we assume fixed-length sequences, although in practice one works with end-of-sequence tokens and variable-length sequences.

The implied generative procedure follows directly from the chain rule. To sample a sequence of observations we (i) sample a white noise sequence ξ = ξ1...T, (ii) deterministically compute h = h1...T from h0 and ξ via F, and (iii) sample from the observation model ∏_{t=1}^{T} P(wt|ht). The remainder of this section focuses on how we can define a sufficiently powerful family of state evolution functions F and how variational inference can be used for training.

2.2 Variational inference
Model-based variational inference (VI) allows us to approximate the marginalization in Eq. (2) by posterior expectations with regard to an inference model q(h|w). It is easy to verify that the true posterior obeys the conditional independences ht ⊥⊥ rest | ht−1, wt:T, which informs our design of the inference model, cf. [FSPW16]:

q(\mathbf{h}|\mathbf{w}) = \prod_{t=1}^{T} q(h_t|h_{t-1}, w_{t:T}). \qquad (3)

This is to say, the previous state is a sufficient summary of the past. Jensen's inequality then directly implies the evidence lower bound (ELBO)

\log P(\mathbf{w}) \ge \mathbb{E}_q\Big[ \log P(\mathbf{w}|\mathbf{h}) + \log \frac{p(\mathbf{h})}{q(\mathbf{h}|\mathbf{w})} \Big] =: \mathcal{L} = \sum_{t=1}^{T} \mathcal{L}_t \qquad (4)

\mathcal{L}_t := \mathbb{E}_q\big[ \log P(w_t|h_t) \big] + \mathbb{E}_q\Big[ \log \frac{p(h_t|h_{t-1})}{q(h_t|h_{t-1}, w_{t:T})} \Big] \qquad (5)

This is a well-known form, which highlights the per-step balance between prediction quality and the discrepancy between the transition probabilities of the unconditioned generative and the data-conditioned inference models [FSnPW16, CKD+15]. Intuitively, the inference model breaks down the long-range dependencies and provides a local training signal to the generative model for a single step transition and a single output generation.
Using VI successfully for generating symbol sequences requires parametrizing powerful yet tractable next-state transitions. As a minimum requirement, forward sampling and log-likelihood computation need to be available. Extensions of VAEs [RM15, KSW16] have shown that for non-sequential models, under certain conditions, an invertible function h = f(ξ) can shape moderately complex distributions over ξ into highly complex ones over h, while still providing the operations necessary for efficient VI. The authors show that a bound similar to Eq. (5) can be obtained by using the law of the unconscious statistician [RM15] and a density transformation to express the discrepancy between generative and inference model in terms of ξ instead of h:

\mathcal{L} = \mathbb{E}_{q(\xi|\mathbf{w})}\Big[ \log P(\mathbf{w}|f(\xi)) + \log \frac{p(f(\xi))}{q(\xi|\mathbf{w})} + \log |\det J_f(\xi)| \Big] \qquad (6)

This allows the inference model to work with an implicit latent distribution at the price of computing the Jacobian determinant of f. Luckily, there are many choices such that this can be done in O(d) [RM15, DSB16].

2.3 Training through coupled transition functions
We propose to use two separate transition functions Fq and Fg for the inference and the generative model, respectively. Using results from flow-based VAEs we derive an ELBO that reveals the intrinsic coupling of both and expresses the relation of the two as a part of the objective that is determined solely by the data. A shared transition model Fq = Fg constitutes a special case.

Two-Flow ELBO. For a transition function F as in Eq.
(1), fix h = h∗ and define the restriction f(ξ) = F(h, ξ)|h=h∗. We require that for any h∗, f is a diffeomorphism and thus has a differentiable inverse. In fact, as we work with (possibly) different Fg and Fq for generation and inference, we have restrictions fg and fq, respectively. For better readability we will omit the conditioning variable h∗ in the sequel. By combining the per-step decomposition in (5) with the flow-based ELBO from (6), we get (implicitly setting h∗ = ht−1):

\mathcal{L}_t = \mathbb{E}_{q(\xi|\mathbf{w})}\Big[ \log P(w_t|f_q(\xi_t)) + \log \frac{p(f_q(\xi_t)|h_{t-1})}{q(\xi_t|h_{t-1}; w_{t:T})} + \log \big|\det J_{f_q}(\xi_t)\big| \Big]. \qquad (7)

As our generative model also uses a flow to transform ξt into a distribution on ht, it is more natural to use the (simple) density in ξ-space. Performing another change of variable, this time on the density of the generative model, we get

p(h_t|h_{t-1}) = p(\zeta_t|h_{t-1}) \cdot \big|\det J_{f_g^{-1}}(f_q(\xi_t))\big| = \frac{r(\zeta_t)}{\big|\det J_{f_g}(\zeta_t)\big|}, \qquad \zeta_t := (f_g^{-1} \circ f_q)(\xi_t) \qquad (8)

where r now is simply the (multivariate) standard normal density, as ξt does not depend on ht−1, whereas ht does. We have introduced the new noise variable ζt = s(ξt) to highlight the importance of the transformation s = f_g^{-1} ◦ f_q, which is a combined flow of the forward inference flow and the inverse generative flow. Essentially, it follows the suggested ξ-distribution of the inference model into the latent state space and back into the noise space of the generative model with its uninformative distribution. Putting this back into Eq. (7) and exploiting the fact that the Jacobians can be combined via det J_s = det J_{f_q} / det J_{f_g}, we finally get

\mathcal{L}_t = \mathbb{E}_{q(\xi|\mathbf{w})}\Big[ \log P(w_t|f_q(\xi_t)) + \log \frac{r(s(\xi_t))}{q(\xi_t|h_{t-1}; w_{t:T})} + \log \big|\det J_s(\xi_t)\big| \Big]. \qquad (9)

Interpretation. Naïvely employing the model-based ELBO approach, one has to learn two independently parametrized transition models p(ht|ht−1) and q(ht|ht−1, wt...T), one informed about the future and one not. Matching the two then becomes an integral part of the objective. However, since the transition model encapsulates most of the model complexity, this introduces redundancy where the learning problem is most challenging. Nevertheless, generative and inference model do address the transition problem from very different angles. Therefore, forcing both to use the exact same transition model might limit flexibility during training and result in an inferior generative model. Thus our model casts Fg and Fq as independently parametrized functions that are coupled through the objective by treating them as proper transformations of an underlying white noise process.²

Special cases. Additive Gaussian noise ht+1 = ht + ξt can be seen as the simplest form of Fg or, alternatively, as a generative model without flow (as Jfg = I). Of course, repeated addition of noise does not provide a meaningful latent trajectory. Finally, note that for Fg = Fq, s = id and the numerator in the second term becomes a simple prior probability r(ξt), whereas the determinant reduces to a constant. We now explore possible candidates for the flows in Fg and Fq.

2.4 Families of transition functions
Since the Jacobian of a composed function factorizes, a flow F is often composed of a chain of individual invertible functions F = Fk ◦ · · · ◦ F1 [RM15]. We experiment with individual functions

F(h_{t-1}, \xi_t) = g(h_{t-1}) + G(h_{t-1})\,\xi_t \qquad (10)

where g is a multilayer MLP R^d → R^d and G is a neural network R^d → R^{d×d} mapping ht−1 to a lower-triangular d × d matrix with non-zero diagonal entries. Again, we use MLPs for this mapping and clip the diagonal away from [−δ, δ] for some hyperparameter 0 < δ < 0.5.
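A minimal sketch of one such conditional affine flow is given below. The layer sizes, the clipping scheme and the module names are our own illustrative choices; the paper only specifies the functional form (10) and the triangular structure.

```python
import torch
import torch.nn as nn

class TrilFlow(nn.Module):
    """One step of the conditional flow in (10): F(h, xi) = g(h) + G(h) xi,
    with G(h) lower-triangular and its diagonal kept away from zero."""

    def __init__(self, d, hidden=64, delta=0.1):
        super().__init__()
        self.d, self.delta = d, delta
        self.g = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, d))
        self.G = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, d * d))

    def forward(self, h, xi):
        G = self.G(h).view(-1, self.d, self.d)
        diag = G.diagonal(dim1=-2, dim2=-1)
        # clip the diagonal away from [-delta, delta] so the triangular map stays invertible
        signs = torch.where(diag >= 0, torch.ones_like(diag), -torch.ones_like(diag))
        diag = signs * diag.abs().clamp(min=self.delta)
        G = torch.tril(G, diagonal=-1) + torch.diag_embed(diag)
        h_next = self.g(h) + torch.einsum("bij,bj->bi", G, xi)
        logdet = diag.abs().log().sum(-1)   # log|det J_f| in O(d) for a triangular matrix
        return h_next, logdet
```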
The lower-triangular structure allows computing the determinant in O(d) and stable inversion of the mapping by substitution in O(d²). As a special case we also consider the case when G is restricted to diagonal matrices. Finally, we experiment with a conditional variant of the Real NVP flow [DSB16]. Computing F_g^{-1} is central to our objective and we found that, depending on the flow, actually parametrizing the inverse directly results in more stable and efficient training.

²Note that identifying s as an invertible function allows us to perform a backwards density transformation which cancels the regularizing terms. This is akin to any flow objective (e.g. see equation (15) in [RM15]) where applying the transformation additionally to the prior cancels out the Jacobian term. We can think of s as a stochastic bottleneck with the observation model P(wt|ht) attached to the middle layer. Removing the middle layer collapses the bottleneck and prohibits learning compression.

2.5 Inference network
So far we have only motivated the factorization of the inference network q(h|w) = ∏ q(ht|ht−1, wt:T) but treated it as a black box otherwise. Remember that sampling from the inference network amounts to sampling ξt ∼ q(·|ht−1, wt...T) and then performing the deterministic transition Fq(ht−1, ξt). We observe much better training stability when conditioning q on the data wt...T only and modeling interaction with ht−1 exclusively through Fq. This coincides with our intuition that the two inputs to a transition function provide semantically orthogonal contributions. We follow existing work [DSB16] and choose q as the density of a normal distribution with diagonal covariance matrix. We follow the idea of [FSPW16] and incorporate the variable-length sequence wt:T by conditioning on the state of an RNN running backwards in time across w1...T. We embed the symbols w1...T in a vector space R^{dE} and use a GRU cell to produce a sequence of hidden states aT, . . . , a1 where at has digested tokens wt:T. Together, ht−1 and at parametrize the mean and covariance matrix of q.

2.6 Optimization
Except in very specific and simple cases, for instance a Kalman filter, it will not be possible to efficiently compute the q-expectations in Eq. (5) exactly. Instead, we sample q in every time-step, as is common practice for sequential ELBOs [FSnPW16, GSC+17]. The re-parametrization trick allows pushing all necessary gradients through these expectations to optimize the bound via stochastic gradient-based optimization techniques such as Adam [KB14].

2.7 Extension: Importance-weighted ELBO for tracking the generative model
Conceptually, there are two ways we can imagine an inference network to propose ξ1:T sequences for a given sentence w1:T. Either, as described above, by digesting w1...T right-to-left and proposing ξ1:T left-to-right. Or, by iteratively proposing a ξt taking into account the last state ht−1 proposed and the generative deterministic mechanism Fg. The latter allows the inference network to peek at states ht that Fg could generate from ht−1 before proposing an actual target ht. This allows the inference model to track a multi-modal Fg without the need for Fq to match its expressiveness. As a consequence, this might offer the possibility to learn multi-modal generative models without the need to employ complex multi-modal distributions in the inference model. Our extension is built on importance weighted auto-encoders (IWAE) [BGS15].
The IWAE ELBO is derived by writing the log marginal as a Monte Carlo estimate before using Jensen's inequality. The result is an ELBO and corresponding gradients of the form³

\mathcal{L} = \mathbb{E}_{h^{(k)}}\Big[ \log \frac{1}{K} \sum_{k=1}^{K} \underbrace{\frac{p(\mathbf{w}, h^{(k)})}{q(h^{(k)}|\mathbf{w})}}_{=: \omega^{(k)}} \Big], \qquad \nabla \mathcal{L} = \mathbb{E}_{h^{(k)}}\Big[ \sum_{k=1}^{K} \frac{\omega^{(k)}}{\sum_{k'} \omega^{(k')}} \nabla \log \omega^{(k)} \Big], \qquad h^{(k)} \sim q(\cdot|\mathbf{w}) \qquad (11)

The authors motivate (11) as a weighting mechanism relieving the inference model from explaining the data well with every sample. We will use the symmetry of this argument to let the inference model condition on potential next states h^g_t = Fg(ht−1, ξt), ξt ∼ N(0, I) from the generative model without requiring every h^g_t to allow q to make a good proposal. In other words, the K sampled outputs of Fg become a vectorized representation of Fg to condition on. In our sequential model, computing ω^(k) exactly is intractable as it would require rolling out the network until time T. Instead, we limit the horizon to only one time-step. Although this biases the estimate of the weights and consequently the ELBO, longer horizons empirically did not show benefits. When proceeding to time-step t+1 we choose the new hidden state by sampling h^(k) with probability proportional to ω^(k). Algorithm 1 summarizes the steps carried out at time t for a given ht−1 (to not overload the notation, we drop t in h^g_t) and a more detailed derivation of the bound is given in Appendix A.

³Here we have tacitly assumed that h can be rewritten using the reparametrization trick so that the expectation can be expressed with respect to some parameter-free base distribution. See [BGS15] for a detailed derivation of the gradients in (11).

Algorithm 1: Detailed forward pass with importance weighting
  Simulate Fg: h_g^(k) = Fg(ht−1, ξ^(k)), where ξ^(k) ∼ N(0, I), k = 1, . . . , K
  Instantiate the inference family: q_k(h) = q(h | h_g^(k), ht−1, wt:T)
  Sample inference: h^(k) ∼ q_k
  Compute gradients as in (11), where ω^(k) = P(wt|h^(k)) p(h^(k)|ht−1) / q_k(h^(k))
  Sample h^(k) according to ω^(1) . . . ω^(K) for the next step.

3 Related Work
Our work intersects with work directly addressing teacher-forcing, mostly on language modelling and translation (which are mostly not state space models), and with stochastic state space models (which are typically autoregressive and do not address teacher forcing). Early work on addressing teacher-forcing has focused on mitigating its biases by adapting the RNN training procedure to partly rely on the model's prediction during training [BVJS15, RCAZ15b]. Recently, the problem has been addressed for conditional generation within an adversarial framework [GLZ+16] and in various learning-to-search frameworks [WR16, LAOL17]. However, by design these models do not perform stochastic state transitions. There have been proposals for hybrid architectures that augment the deterministic RNN state sequences by chains of random variables [CKD+15, FSPW16]. However, these approaches are largely patching up the output feedback mechanism to allow for better modeling of local correlations, leaving the deterministic skeleton of the RNN state sequence untouched. A recent evolution of deep stochastic sequence models has developed models of ever increasing complexity, including intertwined stochastic and deterministic state sequences [CKD+15, FSPW16], additional auxiliary latent variables [GSC+17], auxiliary losses [SATB17] and annealing schedules [BVV+15]. At the same time, it often remains unclear how the stochasticity in these models can be interpreted and measured.
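Returning to Algorithm 1 above, the following is a minimal sketch of the one-step importance-weighted update. Every callable (Fg, q_sample, q_log_prob, p_log_prob, emit_log_prob) is a placeholder of ours for the corresponding model component, and the conditioning on w_{t:T} is abbreviated to the current token for brevity.

```python
import torch

def iw_forward_step(Fg, q_sample, q_log_prob, p_log_prob, emit_log_prob, h_prev, w_t, K=10):
    """One-step importance-weighted update in the spirit of Algorithm 1."""
    xis = [torch.randn_like(h_prev) for _ in range(K)]
    h_g = [Fg(h_prev, xi) for xi in xis]                 # K candidate next states from the prior flow
    h_k = [q_sample(hg, h_prev, w_t) for hg in h_g]      # one inference proposal per candidate
    log_w = torch.stack([
        emit_log_prob(h, w_t) + p_log_prob(h, h_prev) - q_log_prob(h, hg, h_prev, w_t)
        for h, hg in zip(h_k, h_g)
    ])                                                   # log omega^(k), k = 1..K
    # negative one-step IWAE bound: -log (1/K) sum_k omega^(k)
    loss_t = -(torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(K))))
    idx = torch.distributions.Categorical(logits=log_w).sample()   # resample proportional to omega
    return h_k[int(idx)], loss_t
```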
1. How does the proposed model handle complex sequential tasks, such as audio or video modeling, where the sequences are long and high-dimensional?
2. Can the model still perform well without the autoregressive component in more complex tasks that require memory, such as LSTM units?
3. How does the usage of multiple samples from the generative model in the inference network impact the model's performance, compared to using a single sample or the prior mean?
4. What is the effect of using normalizing flows only in either the generative model or the inference network, rather than in both?
5. Can the authors provide further analysis of the novel components of their model, particularly the importance weighted ELBO and the use of normalizing flows?
Review
Review
This paper introduces a probabilistic model for unconditional word generation that uses state space models whose distributions are parameterized with deep neural networks. Normalizing flows are used to define flexible distributions both in the generative model and in the inference network. To improve inference, the inference network uses samples from the prior SSM transitions, borrowing ideas from importance-weighted autoencoders.

I enjoyed reading this paper, as it gives many useful insights on deep state space models and more in general on probabilistic models for sequential data. Also, it introduces novel ways of parameterizing the inference network by constructing a variational approximation over the noise term rather than the state. I appreciated the usage of unconditional word generation as a simple example whose results are easier to interpret, but the validation of these results on more complex sequential tasks would make this a much stronger submission:
- I agree on the discussions on the importance of not having the autoregressive component as most of the models in the literature. In this paper however you only focus on the "simple" sequential task of word generation, while in many cases (e.g. audio or video modelling) you have very long sequences and possibly very high dimensional observations. The natural question is then how do these results generalize to more complex tasks? Would you for example still be able to train a model with a higher dimensional state space without the autoregressive component, performing annealing or having auxiliary losses?
- In more complex tasks you often need some memory in the model (e.g. LSTM units) which is why previous works on deep state space models combine RNNs with SSMs. How would you add memory to your model? Would your conclusions still hold in this case?

Also, more in depth analysis of the novel components of your model would have been useful:
- When defining the importance weighted ELBO, you use many different samples from the generative model as an input to the inference network. Previous work used for example just the prior mean in the inference network, instead of taking all the samples which is computationally intensive. How does the model perform if for example you only pass the prior mean instead of all the samples?
- What happens if you only use normalizing flows in generative model/inference networks, and not in both? This is more commonly done in previous papers.

Minor comments
- The results on the set-based metrics and the ones in table 2 should be discussed more in depth
- Is your citation style in line with the nips guidelines? Usually nips papers have numbered citations

Typos:
Line 6: built
Line 281: samples
NIPS
Title Deep State Space Models for Unconditional Word Generation Abstract Autoregressive feedback is considered a necessity for successful unconditional text generation using stochastic sequence models. However, such feedback is known to introduce systematic biases into the training process and it obscures a principle of generation: committing to global information and forgetting local nuances. We show that a non-autoregressive deep state space model with a clear separation of global and local uncertainty can be built from only two ingredients: An independent noise source and a deterministic transition function. Recent advances on flowbased variational inference can be used to train an evidence lower-bound without resorting to annealing, auxiliary losses or similar measures. The result is a highly interpretable generative model on par with comparable auto-regressive models on the task of word generation. 1 Introduction Deep generative models for sequential data are an active field of research. Generation of text, in particular, remains a challenging and relevant area [HYX+17]. Recurrent neural networks (RNNs) are a common model class, and are typically trained via maximum likelihood [BVV+15] or adversarially [YZWY16, FGD18]. For conditional text generation, the sequence-to-sequence architecture of [SVL14] has proven to be an excellent starting point, leading to significant improvements across a range of tasks, including machine translation [BCB14, VSP+17], text summarization [RCW15], sentence compression [FAC+15] and dialogue systems [SSB+16]. Similarly, RNN language models have been used with success in speech recognition [MKB+10, GJ14]. In all these tasks, generation is conditioned on information that severely narrows down the set of likely sequences. The role of the model is then largely to distribute probability mass within relatively constrained sets of candidates. Our interest is, by contrast, in unconditional or free generation of text via RNNs. We take as point of departure the shortcomings of existing model architectures and training methodologies developed for conditional tasks. These arise from the increased challenges on both, accuracy and coverage. Generating grammatical and coherent text is considerably more difficult without reliance on an acoustic signal or a source sentence, which may constrain, if not determine much of the sentence structure. Moreover, failure to sufficiently capture the variety and variability of data may not surface in conditional tasks, yet is a key desideratum in unconditional text generation. The de facto standard model for text generation is based on the RNN architecture originally proposed by [Gra13] and incorporated as a decoder network in [SVL14]. It evolves a continuous state vector, emitting one symbol at a time, which is then fed back into the state evolution – a property that characterizes the broader class of autoregressive models. However, even in a conditional setting, these RNNs are difficult to train without substitution of previously generated words by ground truth observations during training, a technique generally referred to as teacher forcing [WZ89]. This approach is known to cause biases [RCAZ15a, GLZ+16] that can be detrimental to test time 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. performance, where such nudging is not available and where state trajectories can go astray, requiring ad hoc fixes like beam search [WR16] or scheduled sampling [BVJS15]. 
Nevertheless, teacher forcing has been carried over to unconditional generation [BVV+15]. Another drawback of autoregressive feedback [Gra13] is in the dual use of a single source of stochasticity. The probabilistic output selection has to account for the local variability in the next token distribution. In addition, it also has to inject a sufficient amount of entropy into the evolution of the state space sequence, which is otherwise deterministic. Such noise injection is known to compete with the explanatory power of autoregressive feedback mechanisms and may result in degenerate, near deterministic models [BVV+15]. As a consequence, there have been a variety of papers that propose deep stochastic state sequence models, which combine stochastic and deterministic dependencies, e.g. [CKD+15, FSPW16], or which make use of auxiliary latent variables [GSC+17], auxiliary losses [SATB17], and annealing schedules [BVV+15]. No canoncial architecture has emerged so far and it remains unclear how the stochasticity in these models can be interpreted and measured. In this paper, we propose a stochastic sequence model that preserves the Markov structure of standard state space models by cleanly separating the stochasticity in the state evolution, injected via a white noise process, from the randomness in the local token generation. We train our model using variational inference (VI) and build upon recent advances in normalizing flows [RM15, KSW16] to define rich enough stochastic state transition functions for both, generation and inference. Our main goal is to investigate the fundamental question of how far one can push such an approach in text generation, and to more deeply understand the role of stochasticity. For that reason, we have used the most basic problem of text generation as our testbed: word morphology, i.e. the mechanisms underlying the formation of words from characters. This enables us to empirically compare our model to autoregressive RNNs on several metrics that are intractable in more complex tasks such as word sequence modeling. 2 Model We argue that text generation is subject to two sorts of uncertainty: Uncertainty about plausible long-term continuations and uncertainty about the emission of the current token. The first reflects the entropy of all things considered “natural language", the second reflects symbolic entropy at a fixed position that arises from ambiguity, (near-)analogies, or a lack of contextual constraints. As a consequence, we cast the emission of a token as a fundamental trade-off between committing and forgetting about information. 2.1 State space model Let us define a state space model with transition function F : Rd × Rd → Rd, (ht, ξt) 7→ ht+1 = F (ht, ξt), ξt iid∼ N (0, I) . (1) F is deterministic, yet driven by a white noise process ξ, and, starting from some h0, defines a homogeneous stochastic process. A local observation model P (wt|ht) generates symbols wt ∈ Σ and is typically realized by a softmax layer with symbol embeddings. The marginal probability of a symbol sequence w = w1:T is obtained by integrating out h = h1:T , P (w) = ∫ T∏ t=1 p(ht|ht−1)P (wt|ht) dh . (2) Here p(ht|ht−1) is defined implicitly by driving F with noise as we will explain in more detail below.1In contrast to common RNN architectures, we have defined F to not include an auto-regressive input, such as wt−1, making potential biases as in teacher-forcing a non-issue. Furthermore, this implements our assumption about the role of entropy and information for generation. 
The information about the local outcome under P (wt|ht) is not considered in the transition to the next state as there is no feedback. Thus in this model, all entropy about possible sequence continuations must arise from the noise process ξ, which cannot be ignored in a successfully trained model. 1For ease of exposition, we assume fixed length sequences, although in practice one works with end-ofsequence tokens and variable length sequences. The implied generative procedure follows directly from the chain rule. To sample a sequence of observations we (i) sample a white noise sequence ξ = ξ1...T (ii) deterministically compute h = h1...T from h0 and ξ via F and (iii) sample from the observation model ∏T t=1 P (wt|ht). The remainder of this section focuses on how we can define a sufficiently powerful familiy of state evolution functions F and how variational inference can be used for training. 2.2 Variational inference Model-based variational inference (VI) allows us to approximate the marginalization in Eq. (2) by posterior expectations with regard to an inference model q(h|w). It is easy to verify that the true posterior obeys the conditional independences ht ⊥⊥ rest |ht−1,wt:T , which informs our design of the inference model, cf. [FSPW16]: q(h|w) = T∏ t=1 q(ht|ht−1,wt:T ) . (3) This is to say, the previous state is a sufficient summary of the past. Jensen’s inequality then directly implies the evidence lower bound (ELBO) logP (w) ≥ Eq [ logP (w|h) + log p(h) q(h|w) ] =: L = T∑ t=1 Lt (4) Lt := Eq [logP (wt|ht)] + Eq [ log p(ht|ht−1) q(ht|ht−1,wt:T ) ] (5) This is a well-known form, which highlights the per-step balance between prediction quality and the discrepancy between the transition probabilities of the unconditioned generative and the dataconditioned inference models [FSnPW16, CKD+15]. Intuitively, the inference model breaks down the long range dependencies and provides a local training signal to the generative model for a single step transition and a single output generation. Using VI successfully for generating symbol sequences requires parametrizing powerful yet tractable next state transitions. As a minimum requirement, forward sampling and log-likelihood computation need to be available. Extensions of VAEs [RM15, KSW16] have shown that for non-sequential models under certain conditions an invertible function h = f(ξ) can shape moderately complex distributions over ξ into highly complex ones over h, while still providing the operations necessary for efficient VI. The authors show that a bound similar to Eq. (5) can be obtained by using the law of the unconscious statistician [RM15] and a density transformation to express the discrepancy between generative and inference model in terms of ξ instead of h L = Eq(ξ|w) [ logP (w|f(ξ)) + log p(f(ξ)) q(ξ|w) + log |detJf (ξ)| ] (6) This allows the inference model to work with an implicit latent distribution at the price of computing the Jacobian determinant of f . Luckily, there are many choices such that this can be done in O(d) [RM15, DSB16]. 2.3 Training through coupled transition functions We propose to use two separate transition functions Fq and Fg for the inference and the generative model, respectively. Using results from flow-based VAEs we derive an ELBO that reveals the intrinsic coupling of both and expresses the relation of the two as a part of the objective that is determined solely by the data. A shared transition model Fq = Fg constitutes a special case. Two-Flow ELBO For a transition function F as in Eq. 
(1) fix h = h∗ and define the restriction f(ξ) = F (h, ξ)|h=h∗ . We require that for any h∗, f is a diffeomorphism and thus has a differentiable inverse. In fact, as we work with (possibly) different Fg and Fq for generation and inference, we have restrictions fg and fq, respectively. For better readability we will omit the conditioning variable h∗ in the sequel. By combining the per-step decomposition in (5) with the flow-based ELBO from (6), we get (implicitly setting h∗ = ht−1): Lt = Eq(ξ|w) [ logP (wt|fq(ξt)) + log p(fq(ξt)|ht−1) q(ξt|ht−1;wt:T ) + log ∣∣detJfq (ξt)∣∣] . (7) As our generative model also uses a flow to transform ξt into a distribition on ht, it is more natural to use the (simple) density in ξ-space. Performing another change of variable, this time on the density of the generative model, we get p(ht|ht−1) = p(ζt|ht−1) · |detJf−1g (fq(ξt))| = r(ζt) |detJfg (ζt)| , ζt := (f −1 g ◦ fq)(ξt) (8) where r now is simply the (multivariate) standard normal density as ξt does not depend ht−1, whereas ht does. We have introduced new noise variable ζt = s(ξt) to highlight the importance of the transformation s = f−1g ◦ fq, which is a combined flow of the forward inference flow and the inverse generative flow. Essentially, it follows the suggested ξ-distribution of the inference model into the latent state space and back into the noise space of the generative model with its uninformative distribution. Putting this back into Eq. (7) and exploiting the fact that the Jacobians can be combined via detJs = detJfq/detJfg we finally get Lt = Eq(ξ|w) [ logP (wt|fq(ξt)) + log r(s(ξt)) q(ξt|ht−1;wt:T ) + log |detJs(ξt)| ] . (9) Interpretation Naïvely employing the model-based ELBO approach, one has to learn two independently parametrized transition models p(ht|ht−1) and q(ht|ht−1, wt...T ), one informed about the future and one not. Matching the two then becomes and integral part of the objective. However, since the transition model encapsulates most of the model complexity, this introduces redundancy where the learning problem is most challenging. Nevertheless, generative and inference model do address the transition problem from very different angles. Therefore, forcing both to use the exact same transition model might limit flexibility during training and result in an inferior generative model. Thus our model casts Fg and Fq as independently parametrized functions that are coupled through the objective by treating them as proper transformations of an underlying white noise process. 2 Special cases Additive Gaussian noise ht+1 = ht + ξt can be seen as the simplest form of Fg or, alternatively, as a generative model without flow (as Jfg = I). Of course, repeated addition of noise does not provide a meaningful latent trajectory. Finally, note that for Fg = Fq , s = id and the nominator in the second term becomes a simple prior probability r(ξt), whereas the determinant reduces to a constant. We now explore possible candidates for the flows in Fg and Fq . 2.4 Families of transition functions Since the Jacobian of a composed function factorizes, a flow F is often composed of a chain of individual invertible functions F = Fk ◦ · · · ◦ F1 [RM15]. We experiment with individual functions F (ht−1, ξt) = g(ht−1) + G(ht−1)ξt (10) where g is a multilayer MLP Rd → Rd and G is a neural network Rd → Rd × Rd mapping ht−1 to a lower-triangular d × d matrix with non-zero diagonal entries. Again, we use MLPs for this mapping and clip the diagonal away from [−δ, δ] for some hyper parameter 0 < δ < 0.5. 
The lower-triangular structure allows computing the determinant in O(d) and stable inversion of the mapping by substitution in O(d²). As a special case we also consider the case when G is restricted to diagonal matrices. Finally, we experiment with a conditional variant of the Real NVP flow [DSB16]. Computing F_g^{-1} is central to our objective, and we found that, depending on the flow, parametrizing the inverse directly results in more stable and efficient training.

²Note that identifying s as an invertible function allows us to perform a backwards density transformation which cancels the regularizing terms. This is akin to any flow objective (e.g., see equation (15) in [RM15]) where applying the transformation additionally to the prior cancels out the Jacobian term. We can think of s as a stochastic bottleneck with the observation model P(w_t|h_t) attached to the middle layer. Removing the middle layer collapses the bottleneck and prohibits learning compression.

2.5 Inference network

So far we have only motivated the factorization of the inference network q(h|w) = ∏_t q(h_t|h_{t-1}, w_{t:T}) but treated it as a black box otherwise. Remember that sampling from the inference network amounts to sampling ξ_t ∼ q(·|h_{t-1}, w_{t:T}) and then performing the deterministic transition F_q(h_{t-1}, ξ_t). We observe much better training stability when conditioning q on the data w_{t:T} only and modeling the interaction with h_{t-1} exclusively through F_q. This coincides with our intuition that the two inputs to a transition function provide semantically orthogonal contributions. We follow existing work [DSB16] and choose q as the density of a normal distribution with diagonal covariance matrix. We follow the idea of [FSPW16] and incorporate the variable-length sequence w_{t:T} by conditioning on the state of an RNN running backwards in time across w_{1:T}. We embed the symbols w_{1:T} in a vector space R^{d_E} and use a GRU cell to produce a sequence of hidden states a_T, ..., a_1, where a_t has digested tokens w_{t:T}. Together, h_{t-1} and a_t parametrize the mean and covariance matrix of q.

2.6 Optimization

Except in very specific and simple cases, for instance a Kalman filter, it will not be possible to efficiently compute the q-expectations in Eq. (5) exactly. Instead, we sample q in every time step, as is common practice for sequential ELBOs [FSnPW16, GSC+17]. The re-parametrization trick allows pushing all necessary gradients through these expectations to optimize the bound via stochastic gradient-based optimization techniques such as Adam [KB14].

2.7 Extension: Importance-weighted ELBO for tracking the generative model

Conceptually, there are two ways we can imagine an inference network proposing ξ_{1:T} sequences for a given sentence w_{1:T}. Either, as described above, by digesting w_{1:T} right-to-left and proposing ξ_{1:T} left-to-right. Or, by iteratively proposing a ξ_t taking into account the last state h_{t-1} proposed and the generative deterministic mechanism F_g. The latter allows the inference network to peek at states h_t that F_g could generate from h_{t-1} before proposing an actual target h_t. This allows the inference model to track a multi-modal F_g without the need for F_q to match its expressiveness. As a consequence, this might offer the possibility of learning multi-modal generative models without the need to employ complex multi-modal distributions in the inference model. Our extension is built on importance-weighted auto-encoders (IWAE) [BGS15].
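Before turning to the importance-weighted extension, the following is a minimal, hedged sketch of how one reparametrized per-step term of the two-flow ELBO in Eq. (9) could be evaluated. It assumes flow modules exposing forward/inverse/log-determinant as in the sketch above, a diagonal-Gaussian q as in Section 2.5, and a softmax observation model; all names (per_step_elbo, obs_logits_fn, q_mean, q_logvar) are our own illustrative choices, not the authors' code.

```python
# One Monte Carlo sample of L_t in Eq. (9), given inference flow f_q and
# generative flow f_g (both following the CondTriangularAffine interface above).
import torch

def per_step_elbo(f_q, f_g, obs_logits_fn, q_mean, q_logvar, h_prev, w_t):
    # Reparametrized sample xi_t ~ q(. | h_{t-1}, w_{t:T})  (Section 2.6).
    std = torch.exp(0.5 * q_logvar)
    xi = q_mean + std * torch.randn_like(std)
    log_q = torch.distributions.Normal(q_mean, std).log_prob(xi).sum(-1)

    # Push xi through the inference flow to obtain the next state h_t = f_q(xi).
    h_t, logdet_fq = f_q(h_prev, xi)

    # Map h_t back into the generative noise space: zeta_t = f_g^{-1}(h_t) = s(xi_t).
    zeta, neg_logdet_fg = f_g.inverse(h_prev, h_t)
    log_r = torch.distributions.Normal(torch.zeros_like(zeta),
                                       torch.ones_like(zeta)).log_prob(zeta).sum(-1)

    # Reconstruction term log P(w_t | h_t) under a softmax observation model.
    log_p_w = torch.distributions.Categorical(logits=obs_logits_fn(h_t)).log_prob(w_t)

    # log|det J_s| = log|det J_{f_q}| - log|det J_{f_g}|; the second term is
    # returned already negated by f_g.inverse.
    log_det_s = logdet_fq + neg_logdet_fg
    return log_p_w + log_r - log_q + log_det_s, h_t
```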
The IWAE ELBO is derived by writing the log marginal as a Monte Carlo estimate before using Jensen's inequality. The result is an ELBO and corresponding gradients of the form³

L = E_{h^{(k)}}\left[ \log \frac{1}{K} \sum_{k=1}^{K} \underbrace{\frac{p(w, h^{(k)})}{q(h^{(k)}|w)}}_{=: ω^{(k)}} \right], \quad ∇L = E_{h^{(k)}}\left[ \sum_{k=1}^{K} \frac{ω^{(k)}}{\sum_{k'} ω^{(k')}} ∇\log ω^{(k)} \right], \quad h^{(k)} ∼ q(·|w)   (11)

The authors motivate (11) as a weighting mechanism relieving the inference model from explaining the data well with every sample. We will use the symmetry of this argument to let the inference model condition on potential next states h_{g,t} = F_g(h_{t-1}, ξ_t), ξ_t ∼ N(0, I), from the generative model without requiring every h_{g,t} to allow q to make a good proposal. In other words, the K sampled outputs of F_g become a vectorized representation of F_g to condition on. In our sequential model, computing ω^{(k)} exactly is intractable as it would require rolling out the network until time T. Instead, we limit the horizon to only one time step. Although this biases the estimate of the weights and consequently the ELBO, longer horizons did not show benefits empirically. When proceeding to time step t+1 we choose the new hidden state by sampling h^{(k)} with probability proportional to ω^{(k)}. Algorithm 1 summarizes the steps carried out at time t for a given h_{t-1} (to not overload the notation, we drop t in h_{g,t}), and a more detailed derivation of the bound is given in Appendix A.

³Here we have tacitly assumed that h can be rewritten using the reparametrization trick so that the expectation can be expressed with respect to some parameter-free base distribution. See [BGS15] for a detailed derivation of the gradients in (11).

Algorithm 1: Detailed forward pass with importance weighting
1. Simulate F_g: h_g^{(k)} = F_g(h_{t-1}, ξ^{(k)}), where ξ^{(k)} ∼ N(0, I), k = 1, ..., K.
2. Instantiate the inference family: q_k(h) = q(h | h_g^{(k)}, h_{t-1}, w_{t:T}).
3. Sample inference: h^{(k)} ∼ q_k.
4. Compute gradients as in (11), where ω^{(k)} = P(w_t|h^{(k)}) p(h^{(k)}|h_{t-1}) / q_k(h^{(k)}).
5. Sample h^{(k)} according to ω^{(1)}, ..., ω^{(K)} for the next step.

3 Related Work

Our work intersects with work directly addressing teacher-forcing, mostly on language modelling and translation (which are mostly not state space models), and stochastic state space models (which are typically autoregressive and do not address teacher forcing). Early work on addressing teacher-forcing has focused on mitigating its biases by adapting the RNN training procedure to partly rely on the model's prediction during training [BVJS15, RCAZ15b]. Recently, the problem has been addressed for conditional generation within an adversarial framework [GLZ+16] and in various learning-to-search frameworks [WR16, LAOL17]. However, by design these models do not perform stochastic state transitions. There have been proposals for hybrid architectures that augment the deterministic RNN state sequences by chains of random variables [CKD+15, FSPW16]. However, these approaches largely patch up the output feedback mechanism to allow for better modeling of local correlations, leaving the deterministic skeleton of the RNN state sequence untouched. A recent evolution of deep stochastic sequence models has developed models of ever increasing complexity, including intertwined stochastic and deterministic state sequences [CKD+15, FSPW16], additional auxiliary latent variables [GSC+17], auxiliary losses [SATB17], and annealing schedules [BVV+15]. At the same time, it often remains unclear how the stochasticity in these models can be interpreted and measured.
Closest in spirit to our transition functions is work by Maximilian et al.[KSBvdS17] on generation with external control inputs. In contrast to us they use a simple mixture of linear transition functions and work around using density transformations akin to [BO14]. In our unconditional regime we found that relating the stochasticity in ξ explicitly to the stochasticity in h is key to successful training. Finally, variational conditioning mechanisms similar in spirit to ours have seen great success in image generation[GDGW15]. Among generative unconditional sequential models GANs are as of today the most prominent architecture [YZWY16, JKMHL16, FGD18, CLZ+17]. To the best of our knowledge, our model is the first non-autoregressive model for sequence generation in a maximum likelihood framework. 4 Evaluation Naturally, the quality of a generative model must be measured in terms of the quality of its outputs. However, we also put special emphasis on investigating whether the stochasticity inherent in our model operates as advertised. 4.1 Data Inspection Evaluating generative models of text is a field of ongoing research and currently used methods range from simple data-space statistics to expensive human evaluation [FGD18]. We argue that for morphology, and in particular non-autoregressive models, there is an interesting middle ground: Compared to the space of all sentences, the space of all words has still moderate cardinality which allows us to estimate the data distribution by unigram word-frequencies. As a consequence, we can reliably approximate the cross-entropy which naturally generalizes data-space metrics to probabilistic models and addresses both, over-generalization (assigning non-zero probability to non-existing words) and over-confidence (distributing high probability mass only among a few words). This metric can be addressed by all models which operate by first stochastically generating a sequence of hidden states and then defining a distribution over the data-space given the state sequence. For our model we approximate the marginal by a Monte Carlo estimate of (2) P (w) = ∫ P (w|h)p(h)dh = 1 K K∑ k=1 P (w|h(k)), h(k) ∼ p(h) (12) Note that sampling from p(h) boils down to sampling ξ1...T from independent standard normals and then applying Fg. In particular, the non-autoregressive property of our model allows us to estimate all words in some set S using K samples each by using only K independent trajectories h overall. Finally, we include two data-space metrics as an intuitive, yet less accurate measure. From a collection of generated words, we estimate (i) the fraction of words that are in the training vocabulary (w ∈ V ) and (ii) the fraction of unique words that are in the training vocabulary (w ∈ V unique).4 4.2 Entropy Inspection We want to go beyond the usual evaluation of existing work on stochastic sequence models and also assess the quality of our noise model. In particular, we are interested in how much information contained in a state ht about the output P (wt|ht) is due to the corresponding noise vector ξt. This is quantified by the mutual information between the noise ξt and the observation wt given the noise ξ1:t−1 that defined the prefix up to time t. Since ht−1 is a deterministic function of ξ1:t−1, we write I(t) = I(wt; ξt|ht−1) = Eht−1 [ H[wt|ht−1]−H[wt|ξt,ht−1] ] ≥ 0 (13) to quantify the dependence between noise and observation at one time-step. For a model ignoring the noise variables, knowledge of ξt does not reduce the uncertainty about wt, so that I(t) = 0. 
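As noted next, the expectations in Eq. (13) can be estimated by Monte Carlo. The following is a hedged sketch of such an estimator, assuming a softmax observation model and a generative flow interface as sketched earlier; the function and argument names are ours, not the authors'.

```python
# Illustrative Monte Carlo estimate of I(t) = E[ H[w_t|h_{t-1}] - H[w_t|xi_t, h_{t-1}] ]
# from Eq. (13), reported in bits.
import math
import torch

@torch.no_grad()
def noise_mutual_information(f_g, obs_logits_fn, h_prev, K: int = 20):
    B, d = h_prev.shape
    # Sample K noise vectors per example and compute P(w_t | h_t) for each.
    h_rep = h_prev.unsqueeze(1).expand(B, K, d).reshape(B * K, d)
    xi = torch.randn(B * K, d, device=h_prev.device)
    h_t, _ = f_g(h_rep, xi)
    probs = torch.softmax(obs_logits_fn(h_t), dim=-1).view(B, K, -1)    # (B, K, V)

    # H[w_t | xi_t, h_{t-1}]: entropy of each conditional, averaged over xi_t.
    cond_ent = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(1)  # (B,)

    # H[w_t | h_{t-1}]: entropy of the xi-marginalized predictive distribution.
    marg = probs.mean(1)                                                # (B, V)
    marg_ent = -(marg * marg.clamp_min(1e-12).log()).sum(-1)            # (B,)

    return (marg_ent - cond_ent).mean() / math.log(2.0)                 # nats -> bits
```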
We can use Monte Carlo estimates for all expectations in (13). 5 Experiments 5.1 Dataset and baseline For our experiments, we use the BooksCorpus [KZS+15, ZKZ+15], a freely available collection of novels comprising of almost 1B tokens out of which 1.3M are unique. To filter out artefacts and some very uncommon words found in fiction, we restrict the vocabulary to words of length 2 ≤ l ≤ 12 with at least 10 occurrences that only contain letters resulting in a 143K vocabulary. Besides the standard 10% test-train split at the word level, we also perform a second, alternative split at the vocabulary level. That means, 10 percent of the words, chosen regardless of their frequency, will be unique to the test set. This is motivated by the fact that even a small test-set under the former regime will result in only very few, very unlikely words unique to the test-set. However, generalization to unseen words is the essence of morphology. As an additional metric to measuring generalization in this scenario, we evaluate the generated output under Witten-Bell discounted character n-gram models trained on either the whole corpus or the test data only. Our baseline is a GRU cell and the standard RNN training procedure with teacher-forcing5. Hidden state size and embedding size are identical to our model’s. 5.2 Model parametrization We stick to a standard softmax observation model and instead focus the model design on different combinations of flows for Fg and Fq . We investigate the flow in Equation (10), denoted as TRIL, its diagonal version DIAG and a simple identity ID. We denote repeated application of (independently parametrized) flows as in 2 × TRIL. For the weighted version we use K ∈ {2, 5, 10} samples. In addition, for Fg we experiment with a sequence of Real NVPs with masking dimensions d = 2 . . . 7 (two internal hidden layers of size 8 each). Furthermore, we investigate deviating from the factorization (3) by using a bidirectional RNN conditioning on all w1...T in every timestep. Finally, for the best performing configuration, we also investigate state-sizes d = {16, 32}. 4Note that for both data-space metrics there is a trivial generation system that achieves a ‘perfect’ score. Hence, both must be taken into account at the same time to judge performance. 5It should be noted that despite the greatly reduced vocabulary in character-level generation, RNN training without teacher-forcing for our data still fails miserably. 5.3 Results Table 1 shows the result for the standard split. By ± we indicate mean and standard deviation across 5 or 10 (for IWAE) identical runs6. The data-space metrics require manually trading off precision and coverage. We observe that two layers of the TRIL flow improve performance. Furthermore, importance weighting significantly improves the results across all metrics with diminishing returns at K = 10. Its effectiveness is also confirmed by an increase in variance across the weights ω1 . . . ωT during training which can be attributed to the significance of the noise model (see 5.4 for more details). We found training with REAL-NVP to be very unstable. We attribute the relatively poor performance of NVP to the sequential VI setting which deviates heavily from what it was designed for and keep adaptions for future work. 
Model | H[Ptrain, P̂] | H[Ptest, P̂] | w ∈ V unique | w ∈ V | Ī
TRIL | 12.13±.11 | 11.99±.11 | 0.18±.00 | 0.43±.03 | 0.95±.04
TRIL, K=2 | 11.76±.12 | 11.82±.12 | 0.16±.01 | 0.46±.02 | 1.06±.16
TRIL, K=5 | 11.46±.05 | 11.51±.05 | 0.16±.01 | 0.48±.02 | 1.08±.13
TRIL, K=10 | 11.43±.05 | 11.47±.05 | 0.16±.01 | 0.49±.02 | 1.12±.12
2×TRIL | 11.91±.08 | 11.86±.13 | 0.17±.01 | 0.45±.02 | 0.89±.07
2×TRIL, K=2 | 11.55±.09 | 11.61±.09 | 0.16±.00 | 0.47±.01 | 1.00±.13
2×TRIL, K=5 | 11.42±.07 | 11.46±.06 | 0.16±.00 | 0.49±.01 | 1.20±.12
2×TRIL, K=10 | 11.33±.05 | 11.38±.06 | 0.16±.00 | 0.49±.01 | 1.28±.13
2×TRIL, K=10, BIDI | 11.33±.09 | 11.39±.10 | 0.16±.01 | 0.48±.00 | 1.25±.16
d = 16, 2×TRIL, K=10 | 11.21 | 11.43 | 0.15 | 0.48 | 1.43
d = 32, 2×TRIL, K=10 | 11.27 | 11.13 | 0.15 | 0.50 | 1.31
REAL-NVP-[2,3,4,5,6,7] | 11.77 | 11.81 | 0.12 | 0.53 | 0.94
BASELINE-8D | 12.92 | 12.97 | 0.13 | 0.53 | –
BASELINE-16D | 12.55 | 12.60 | 0.14 | 0.62 | –
ORACLE-TRAIN | 7.0 | 7.02⁷ | 0.27 | 1.0 | –

Table 1: Results on generation. The cross entropy is computed wrt. both the training and the test set. ORACLE-TRAIN is a model sampling from the training data.

Interestingly, our standard inference model is on par with the equivalently parametrized bidirectional inference model, suggesting that historic information can be sufficiently stored in the states and confirming d-separation as the right principle for inference design. The poor cross-entropy achieved by the baseline can partly be explained by the fact that autoregressive RNNs are trained on conditional next-word predictions. Estimating the real data-space distribution would require aggregating over all possible sequences w ∈ V^T. However, the data-space metrics clearly show that the performance cannot solely be attributed to this. Table 2 shows that generalization for the alternative split is indeed harder, but the cross-entropy results carry over from the standard setting. Here we sample trajectories and extract the argmax from the observation model, which resembles more closely the procedure of the baseline. Under n-gram perplexity both models are on par, with a slight advantage of the baseline on longer n-grams and slightly better generalization of our proposed model.

Model | H[Ptrain, P̂] | H[Ptest, P̂] | n-gram from train+test: P2 | P3 | P4 | P5 | n-gram from test: P2 | P3 | P4 | P5
2×TRIL, K=10 | 11.56 | 12.27 | 10.4 | 12.8 | 20.9 | 30.7 | 13.1 | 21.9 | 49.6 | 81.1
BASELINE-8D | 12.90 | 13.67 | 11.4 | 12.1 | 17.5 | 24.8 | 14.5 | 22.7 | 48.3 | 80.5
ORACLE-TRAIN | – | – | 10.1 | 6.7 | 4.8 | 4.1 | 13.2 | 15.7 | 21.4 | 26.4
ORACLE-TEST | – | – | 9.5 | 6.0 | 4.5 | 3.9 | 7.9 | 4.1 | 2.9 | 2.6

Table 2: Results for the alternative data split: Cross entropy and perplexity under n = 2, 3, 4, 5-gram language models estimated on either the full corpus or the test set only.

To give more insight into how the transition functions influence the results, Table 1a presents an exhaustive overview for all combinations of our simple flows. We observe that a powerful generative flow is essential for successful models while the inference flow can remain relatively simple – yet simplistic choices, such as ID, degrade performance. Choosing Fg slightly more powerful than Fq emerges as a successful pattern.

⁶Single best model with d = 8: 2×TRIL, K = 10 achieved H[Ptrain, P̂] = 11.26 and H[Ptest, P̂] = 11.28.
⁷Note that the training-set oracle is not optimal for the test set. The entropy of the test set is 6.80.

5.4 Noise Model Analysis

We use K = 20 samples to approximate the entropy terms in (13). In addition, we denote by Ī the average mutual information across all time steps. Figure 3 shows how Ī, along with the symbolic entropy H[wt|ht], changes during training.
Remember that in a non-autoregressive model, the latter corresponds to information that cannot be recovered in later time steps. Over the course of the training, more and more information is driven by ξt and absorbed into states ht, where it can be stored. Figures 1 and 1b show Ī for all trained models. In addition, Figure 2 shows a box plot of I(t) for each t = 1 . . . T for the configuration 2×TRIL, K=10. As initial tokens are more important to remember, it should not come as a surprise that I(t) is largest first and decreases over time, yet with increased variance.

Figure 2: Noise mutual information I(t) over sequence position t = 1 . . . T.
Figure 3: Entropy analysis over training time. For reference, the dashed line indicates the overall word entropy of the trained baseline.

6 Conclusion

In this paper we have shown how a deep state space model can be defined and trained with the help of variational flows. The recurrent mechanism is driven purely by a simple white noise process and does not require autoregressive conditioning on previously generated symbols. In addition, we have shown how an importance-weighted conditioning mechanism integrated into the objective allows shifting stochastic complexity from the inference to the generative model. The result is a highly flexible framework for sequence generation with an extremely simple overall architecture, a measurable notion of latent information, and no need for pre-training, annealing, or auxiliary losses. We believe that pushing the boundaries of non-autoregressive modeling is key to understanding stochastic text generation and can open the door to related fields such as particle filtering [NLRB17, MLT+17].
1. What is the main contribution of the paper, and how does it differ from other sequence generation models? 2. What are the strengths and weaknesses of the proposed model, particularly in its experimental setup and applications? 3. How does the reviewer assess the clarity and significance of the paper's content, and what suggestions do they have for improvement? 4. What are the limitations of the proposed model, and how does it compare to other methods in the field? 5. How does the reviewer evaluate the novelty and impact of the paper's contributions, and what potential directions for future research do they suggest?
Review
Review Summary This paper proposes a state space model for sequence generation, where the generative model is non-autoregressive but the (variational) inference network can condition state predictions on the observed sequence. The model is evaluated on character-based lexicon generation (generating words independent of any context). Quality The paper proposes a novel, principled method for learning a non-autoregressive sequence model. I am not familiar enough with flow-based VAE to verify the model derivation, but it seems reasonable. My main concerns are with the experimental setup: Generating words in isolation is a relatively restricted task, and I don't see why the model should not be applied to (word-level) sentence generation, even if at first that mainly points out limitations of the model. The train/test split in which the test set words are completely disjoint from the training words makes more sense than the one where the same words can appear in the train and test data, so I would like to see more of a comparison between the different models there. Clarity Section 2 focuses on how the model is trained, but I would like to see a more algorithmic explanations of how the method is applied to sequence generation, for either pure sampling or inference given an observed sequence. In Algorithm 1 there are two hidden state values, h^k_g and h^k - it is not clear why, or how q_k() is defined. Significance This is an interesting model, and the proposed model obtains lower cross-entropy than a comparable baseline. I am inclined to recommend acceptance. However it is not entirely clear to me what the proposed model's advantages are, or what the results tell us about the problem being studied, so I would like to see clearer motivations and analysis. It seems that for unconstrained sampling from the model it is purely autoregressive, but during inference given an observed sequence the inferred noise variable is supposed to capture part of the variance which could otherwise have been captured by an auto-regressive model. Does this mean the we need, or don't need, the autoregressive property for the problem studied? --- Thanks for clarifying the motivation.
NIPS
Title TriBERT: Human-centric Audio-visual Representation Learning
Abstract The recent success of transformer models in language, such as BERT, has motivated the use of such architectures for multi-modal feature learning and tasks. However, most multi-modal variants (e.g., ViLBERT) have limited themselves to visual-linguistic data. Relatively few have explored its use in audio-visual modalities, and none, to our knowledge, illustrate them in the context of granular audio-visual detection or segmentation tasks such as sound source separation and localization. In this work, we introduce TriBERT – a transformer-based architecture, inspired by ViLBERT, which enables contextual feature learning across three modalities: vision, pose, and audio, with the use of flexible co-attention. The use of pose keypoints is inspired by recent works that illustrate that such representations can significantly boost performance in many audio-visual scenarios where often one or more persons are responsible for the sound explicitly (e.g., talking) or implicitly (e.g., sound produced as a function of human manipulating an object). From a technical perspective, as part of the TriBERT architecture, we introduce a learned visual tokenization scheme based on spatial attention and leverage weak-supervision to allow granular cross-modal interactions for visual and pose modalities. Further, we supplement learning with sound-source separation loss formulated across all three streams. We pre-train our model on the large MUSIC21 dataset and demonstrate improved performance in audio-visual sound source separation on that dataset as well as other datasets through fine-tuning.
In addition, we show that the learned TriBERT representations are generic and significantly improve performance on other audio-visual tasks such as cross-modal audio-visual-pose retrieval by as much as 66.7% in top-1 accuracy. 1 Introduction Multi-modal audio-visual learning [57], which explores and leverages the relationship between visual and auditory modalities, has started to emerge as an important sub-field of machine learning and computer vision. Examples of typical tasks include: audio-visual separation and localization, where the goal is to segment sounds produced by individual objects in an audio and/or to localize those objects in a visual scene [15, 16, 42, 55]; and audio-visual correspondence, where the goal is often audio-visual retrieval [23, 47, 53]. Notably, some of the most recent audio-visual methods [15] leverage human pose keypoints, or landmarks, as an intermediate or contextual representation. This tends to improve the overall performance of sound separation, as pose and motion are important cues for characterising both the type of instrument being played and, potentially, over time, the rhythm of the individual piece [15]. It can also serve as an intermediate representation when generating video from acoustic signals [8, 44] for example. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Most of the existing architectures tend to extract features from the necessary modalities using pre-trained backbones (e.g., CNNs applied to video frames [55], object regions [16], and audio spectrograms; and/or graph CNN for human pose [15]) and then construct problem-specific architectures that often utilize simple late fusion for cross-modal integration in decoding (e.g., to produce spectrogram masks [15, 16, 55]). This is contrary to current trends in other multi-modal problem domains, where over the past few years, approaches have largely consolidated around generic multi-modal feature learning architectures that are task agnostic to produce contextualized feature representations and then fine-tune those representations to a variety of tasks (e.g., visual question answering (VQA) or reasoning (VCR)) and datasets. Examples of such architectures include ViLBERT [33], VL-BERT [46], and Unicoder-VL [31], all designed specifically for visual-linguistic tasks. Audio-visual representation learning has, in comparison, received much less attention. Most prior works [51] assume a single sound source per video and rely on audio-visual alignment objectives. Exceptions include [39], which relies on proposal mechanisms and multiple-instance learning [49] or co-clustering [25]. These approaches tend to integrate multi-modal features extracted using pre-trained feature extractors (e.g., CNNs) at a somewhat shallow level. The very recent variants [6, 28, 35] leverage transformers for audio-visual representation learning through simple classification [6] and self-supervised [28] or contrastive [35] learning objectives while only illustrating performance on video-level audio-visual action classification. To the best of our knowledge, no audio-visual representation learning approach to date has explored pose as one of the constituent modalities; nor has shown that feature integration and contextualization at a hierarchy of levels, as is the case for BERT-like architectures, can lead to improvements on granular audio-visual tasks such as audio-visual sound source separation. 
To address the aforementioned limitations, we formulate a human-centric audio-visual representation learning architecture, inspired by ViLBERT [33] and other transformer-based designs, with an explicit goal of improving the state-of-the-art in audio-visual sound source separation. Our transformer model takes three streams of information: video, audio, and (pose) keypoints and co-attends among those three modalities to arrive at enriched representations that can then be used for the final audiovisual sound separation task. We illustrate that these representations are general and also improve performance on other auxiliary tasks (e.g., forms of cross-modal audio-visual-pose retrieval). From a technical perspective, unlike ViLBERT and others, our model does not rely on global frame-wise features nor an external proposal mechanism. Instead, we leverage a learned attention to form visual tokens, akin to [42], and leverage weakly supervised objectives that avoid single sound-source assumptions for learning. In addition, we introduce spectrogram mask prediction as one of our pre-training tasks to enable the network to better learn task-specific contextualized features. Contributions: Foremost, we introduce a tri-modal VilBERT-inspired model, which we call TriBERT, that co-attends among visual, pose keypoint, and audio modalities to produce highly contextualized representations. We show that these representations, obtained by optimizing the model with respect to uni-modal (weakly-supervised) classification and sound separation pretraining objectives, produce features that improve audio-visual sound source separation at large and also work well on other downstream tasks. Further, to avoid reliance on the image proposal mechanisms, we formulate tokenization in the image stream in terms of learned attentional pooling, which is learned jointly. This alleviates the need for externally trained detection mechanisms, such as Faster R-CNN and variants. We illustrate competitive performance on a number of granular audio-visual tasks both by using the TriBERT model directly, using it as a feature extractor, or by fine-tuning it. 2 Related works Audio-visual Tasks. There exists a close relationship between visual scenes and the sounds that they produce. This relationship has been leveraged to complete various audio-visual tasks. Based on [57]’s survey of audio-visual deep learning, these tasks can be categorized into four subfields, three of which are addressed in this paper and described in the following three subsections. Audio-visual Sound Source Separation and Localization. Sound source separation and the related task of sound source localization have been studied quite extensively. Previous works studying separation, also known as the cocktail party problem [19], leverage multi-modal audio-visual information [11, 14] to help improve performance with respect to their audio-only counterparts [26, 34]. Examples include learning correlations between optical flow and masked frequencies [9, 13], using graphical models [21], detecting salient motion signals that correspond to audio events [30, 40], and extracting pose keypoints to model human movements [15]. A close connection between separation and localization has also been illustrated [40, 43, 55, 56]. For example, [16, 42] both formulate the task as one of auditory and visual co-segmentation, either with pre-trained object regions obtained by the detector [16] or directly from the image [42]. 
All of these approaches contain highly specialized architectures with custom fusion schemes. We aim to leverage the flexibility of transformer models to create generalized multi-modal representations that improve on audio-visual tasks. Audio-visual Representation Learning. The goal is typically to learn aligned representations. The quality of these representations has been shown to greatly impact the overall performance of tasks downstream [4]. A common strategy for representation learning is to introduce a proxy task. In the audio-visual space, past works [1, 2, 38] have trained networks by having them watch and listen to a large amount of unlabeled videos containing both positive samples of matching audio and visual pairs and negative samples of mismatched pairs; the proxy task is binary classification of whether or not the audio and visual match each other. Other proxy tasks include determining whether or not an audio-visual pair is time synchronized [27]; and [29] uses a classification task to identify the correct visual clip or audio stream from a set with negative samples. However, these works rely on the assumption that only one main sound source occurs at a time and everything else is background noise. Our model uses a weakly supervised proxy objective to learn representations for multiple sources of sound (two in experiments) occurring simultaneously and also learns to incorporate pose features. Audio-visual Correspondence Learning. One of the fundamental tasks in correspondence learning related to our work is cross-modality retrieval. Most prior works focus on audio-visual retrieval [24, 36, 48] and propose learning a joint embedding space where both modalities can be mapped to. In this space, semantically related embeddings are close to each other and thus retrieval can be performed by selecting the closest embedding to the query from the alternate modality. In our work, we demonstrate that enhanced feature representations obtained by our pretrained model capture aligned semantics and lead to much better cross-modal retrieval than baseline representations. Visiolinguistic Representation Learning. Our model is inspired by the recent successes of visiolinguistic representations. Most such approaches leverage a combination of uni-modal and cross-modal transformer modules to pre-train generic visiolinguistic representations on masked language and/or multi-modal alignment tasks. For example, [33] proposes separate streams for each modality that communicate with each other through co-attention, while [46] uses a single-stream model that takes both visual and linguistic embeddings as input. In our work, we also leverage co-attention modules to learn joint representations between audio, pose, and vision modalities. However, in addition to extending co-attention, we also focus on reformulating image tokenization and demonstrate the ability to learn with weakly-supervised classification objectives as opposed to masked token predictions. 3 Approach We introduce TriBERT, a network that learns a joint representation of three modalities: vision, pose, and audio. We briefly review ViLBERT, the architecture that inspired TriBERT, in Section 3.1. We then describe our TriBERT architecture in Section 3.2, including pretraining tasks and objectives. 3.1 Reviewing Vision-and-Language BERT (ViLBERT) Motivated by the recent success of the BERT architecture for transfer learning in language modeling, Lu et al. [33] proposed ViLBERT to represent text and visual content jointly. 
ViLBERT is a twostream model for image regions and text segments. Each stream is similar to the BERT architecture, containing a series of transformer blocks (TRM) [50]. Given an image I with corresponding regions-of-interest (RoIs) or bounding boxes v0, v1, ...vN and an input sentence S with word tokens w0, w1, ...wT , the final output representations are hv0, hv1, ..., hvN and hw0, hw1, ..., hwT for the visual and linguistic features, respectively. To exchange information between the two modalities, the authors introduced a co-attentional transformer layer which computes query (Q), key (K), and value (V ) pairs like a standard transformer block. The keys and values from each modality are then fed to the multi-headed attention block of the other modality. The attention block in each stream generates attention-pooled features conditioned on the other modality and outputs a multi-modal joint representation which outperforms single-stream models across multiple vision-and-language tasks. 3.2 TriBERT Architecture The architecture of our proposed TriBERT network is illustrated in Figure 1. Inspired by the recent success of ViLBERT in the vision-and-language domain, we modify its architecture to a three-stream network for vision, pose, and audio. Similar to ViLBERT [33], we use a bi-directional Transformer encoder [50] as the backbone network. However, TriBERT also introduces integral components that differentiate its architectural design. First, instead of using bounding box visual features generated by a pre-trained object detector [33] or CNN feature columns [7], TriBERT uses a jointly trained weakly supervised visual segmentation network. Our end-to-end segmentation network takes a sequence of consecutive frames to detect and segment objects, and the corresponding features are pooled and fed as tokens to the visual stream. Second, the pose tokens are characterized by per-person keypoints encoded using a Graph CNN, and the audio token is produced by the VGGish Network [22] applied to an audio spectogram. All three types of tokens form the input to TriBERT, which refines them using tri-modal co-attention to arrive at the final multi-modal representations. Training TriBERT requires the definition of proxy/pretraining tasks and the corresponding losses (see Section 3.2.1). Specifically, while we adopt token masking used in ViLBERT and others, we are unable to define classification targets per token in our visual and pose streams. This is because we only assume per-video labels (e.g., of instruments played) and no access to how those map to attended sounding regions or person instances involved. To address this, we introduce weakly-supervised classification losses for those two streams. Since only one global audio representation is used, this is unnecessary in the audio stream and standard cross-entropy classification can be employed. Finally, motivated by recent works that show that multi-task pretraining is beneficial for ViLBERT [32], we introduce an additional spectrogram mask prediction pretraining task which predicts spectrogram masks for each individual audio source from the input spectrogram (bottom block, Figure 1). Visual Representations. Unlike [33], we consider input video frames instead of detected object/bounding box features as our visual input and propose an end-to-end approach to detect and segment objects from each individual frame. Figure 2 illustrates our visual segmentation network which takes in RGB frames as input. 
To extract global features, we use ResNet50 [20] as the backbone network followed by a 3 × 3 convolution to generate H × W visual spatial features which are then fed into the segmentation network. Following [54], we use a decou- pled spatial neural attention structure to detect and localize discriminative objects simultaneously. The attention network has two branches: (1) Expansive attention detector, which aims to detect object regions and generate the expansive attention map SE ∈ RC×H×W (top branch of Figure 2); and (2) Discriminative attention detector, which aims to predict discriminative regions and generate the discriminative attention map SD ∈ RC×H×W (bottom branch of Figure 2). The expansive attention detector contains a drop-out layer followed by a 1 × 1 convolution, another drop-out layer, a non-linear activation, and a spatial-normalization layer, defined as follows: λc(i,j) = F (W T c Ve(:, i, j) + b c), (1) αc(i,j) = λc(i,j)∑H i ∑W j λ c (i,j) , (2) where c ∈ C and F (·) denote number of classes and the non-linear activation function, respectively. The final attention map (Am) is generated as : Am = SE SD, where denotes element-wise multiplication. A spatial average pooling is applied on Am to generate a classification score for each corresponding class and pooled-out top two class features from spatial-visual feature (Ve). The resultant 3× 2× 1024 visual embeddings are used to train our proposed TriBERT architecture, where 3 corresponds to the number of frames and 2 to the number of "objects" per frame. Keypoint (pose) Representations. Our goal is to capture human body and finger movement through keypoint representations. Therefore, we extract 26 keypoints for body joints and 21 keypoints for each hand using the AlphaPose toolbox [12]. As a result, we identify the 2D (x, y) coordinates and corresponding confidence scores of 68 body joints. Following [15], we use Graph CNN to generate semantic context comprising of those joints. Similar to prior work [52] on action recognition, we construct a Spatial-Temporal Graph Convolutional Network G = {V,E} where each node vi ∈ {V } corresponds to the body joint’s keypoint and each edge ei ∈ {E} the natural connectivity between those keypoints. We use 2D coordinates of the detected body joints with confidence scores as input to each node and construct a spatial-temporal graph by: (1) connecting human body joints within a single frame according to body structure; and (2) connecting each joint with the same joint from the consecutive frames. This way, multiple layers of spatial-temporal graph convolutions are constructed to generate higher-level features for human keypoints. We use publicly available code1 to re-train their model on our dataset and extract body joint features of size 2 × 256 × 68 before the final classification layer (corresponding to two person instances). We apply a linear layer to transform these to 3× 2× 1024 input embeddings for pose BERT where 3 corresponds to the number of visual frames and 2 to maximum number of persons per frame. Audio Representations. Consistent with prior works, we use a time-frequency representation of the input audio. We apply STFT [18] to generate the corresponding spectrogram and then transform the magnitudes of the spectrogram into the log-frequency scale for further processing. 
The size of the final input audio spectrogram is 1× 256× 256 and is used in two ways: (1) as an audio embedding for audio BERT; and (2) as the input audio for attention U-net for the task of sound source separation, which predicts individual audio spectrogram masks (see Figure 1). Before passing to audio BERT, we use a VGGish Network [22] to extract global features for input audio embedding. 1https://github.com/yysijie/st-gcn Tri-modal Co-attention. Recent works [3, 33] propose co-attentional transformer layers to generate effective representations of vision conditioned on language and vice versa. Multi-head attention Add & Norm Add & Norm Feed Forward Visual HV (i+1) HV (i) Multi-head attention Add & Norm Add & Norm Feed Forward Pose HP (j+1) HP (j) Multi-head attention Add & Norm Add & Norm Feed Forward Audio HA (k+1) HA (k) QV K(P,A) V(P,A) QP K(V,A) V(V,A) QA K(P,V) V(P,V) In this paper, we introduce a tri-modal coattentional layer, illustrated on the right, by extending ViLBERT’s co-attentional transformer layers [33]. Given intermediate representations for vision, pose, and audio, denoted as HV (i), HP (j), and HA(k), respectively, each stream computes individual query (Q), key (K), and value (V ) matrices. The keys and values from two modalities are then concatenated together and fed as input to the multi-head attention block of the third modality. As a result, the block generates attention features conditioned on the other two modalities. We keep the rest of the architecture, such as the feed forward layers, residual connections, etc. the same as a standard transformer block, which is then used to generate effective multi-modal features. 3.2.1 Training Tasks We pre-train TriBERT jointly on two tasks: instrument classification and sound source separation. Our proposed architecture has three separate streams and each stream performs an individual classification task. To train our TriBERT model, we use the MUSIC21 dataset [56], which contains 21 instruments. Weakly-supervised Visual and Pose Classification. Our visual segmentation network generates attention features for input video frames. We then apply a spatial pooling, and the resulting feature vector is fed into the visual BERT. We use a special <SOS> token at the beginning of the input frame sequence to represent the entire visual input. Following [33], we apply masking to approximately 15% of the input image regions (see Figure 1). The output of the visual BERT is a sequence of hidden representations hv0, hv1, ..hvN conditioned on the pose and audio modalities. We use mean pooling of all hidden representations to perform classification for the detected objects. Similarly, pose BERT generates a sequence of hidden representations hp0, hp1, ..hpN conditioned on the visual and audio modalities, and we apply classification based on the mean pooling of all hidden states. Due to the lack of instance annotations, we cannot use region/pose level supervision. Following [5], we use a weakly-supervised approach to perform region selection and classification. Audio Classification. Since we do not have a sequence of audio embeddings, we artificially create an audio sequence for computational convenience by repeating the VGGish audio feature to generate a sequence of hidden representations ha0, ha1, ..haN conditioned on the visual and pose modalities. This is done purely for engineering convenience to allow consistent use of tri-modal co-attention across modalities. 
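Before returning to the per-stream training tasks, the tri-modal co-attention described above can be sketched as follows. This is an illustrative PyTorch module under simplifying assumptions (all three streams are taken to share a hidden size, whereas the paper uses 512 for audio and 1024 for vision/pose; names are ours), not the released TriBERT code.

```python
# Minimal sketch of one tri-modal co-attention block: queries come from one
# stream, keys/values are the concatenation of the other two streams.
import torch
import torch.nn as nn

class TriModalCoAttention(nn.Module):
    def __init__(self, dim: int = 1024, heads: int = 8, ff: int = 4096):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, ff), nn.GELU(), nn.Linear(ff, dim))

    def forward(self, h_self, h_other1, h_other2):
        # Keys and values from the two other modalities are concatenated.
        kv = torch.cat([h_other1, h_other2], dim=1)        # (B, N1+N2, dim)
        attended, _ = self.attn(query=h_self, key=kv, value=kv)
        h = self.norm1(h_self + attended)                  # Add & Norm
        return self.norm2(h + self.ffn(h))                 # Feed Forward + Add & Norm

# One co-attentional layer applies three such blocks in parallel on the same
# intermediate representations, e.g.:
# h_v_new = block_v(h_v, h_p, h_a); h_p_new = block_p(h_p, h_v, h_a); h_a_new = block_a(h_a, h_p, h_v)
```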
We then apply audio classification on the mean feature of all hidden representations. Multi-modal Sound Source Separation. We consider sound source separation as one of our initial tasks and follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], a well-known approach to solve this problem. The goal is to mix multiple audio signals to generate an artificially complex auditory representation and then learn to separate individual sounds from the mixture. Given two input videos V1 and V2 with accompanying audio A1(t) and A2(t), we mix A1 and A2 to generate a complex audio signal mixture Am(t) = A1(t) +A2(t). Suppose V1 has two objects o1′ and o1′′ with accompanying audio a1′ and a1′′ while V2 has one object o2′ with audio a2′. The goal is to separate sounds a1′, a1′′, and a2′ from the mixture Am(t) by predicting spectrogram masks using attention U-net [37], which takes in the mixed spectrogram as input. Attention U-net contains 7 convolutions and 7 de-convolutions with skip connections. The skip connections use attention gates (AG) comprise simple additive soft attentions to highlight relevant regions of the audio spectrograms. The overhead of attention U-Net over U-Net is fairly minimal. Specifically, in terms of the number of parameters, attention U-Net contains a modest 9% more parameters as compared to U-Net and the inference speed is only 7% slower [37]. The attention U-net outputs the final magnitude of the spectrogram mask (bottom branch in Figure 1) guided by audio-visual-pose features. Following [15], we adopt a self-attention based early fusion between the bottle-neck of attention U-net with the fused features (i.e. concatenation of features) corresponding to the <SOS> tokens of three BERT streams. We combine the predicted magnitude of the spectrogram mask from attention U-net with the phase of the input spectrogram and then use inverse STFT [18] to get back the wave-form of the prediction. Training Objective. We consider weakly-supervised classification for the visual and pose modalities. Following [5], we use two data streams from the hidden state of each modality. The first stream corresponds to a class score (βclass) for each individual region to perform recognition. This is achieved by a linear layer followed by a softmax operation (see Eq. 3). The second stream computes a probability distribution (βdet) for performing a proxy detection. This is done by using another linear layer followed by another softmax operation (see Eq.4) as follows: βclass(h c)ij = eh c ij∑C t=1 e hctj , (3) βdet(hd)ij = eh d ij∑|R| t=1 e hdtj , (4) where hc ∈ RC×|R|, hd ∈ RC×|R| and C denotes the number of classes. We then aggregate the recognition and detection scores to predict the class of all image regions as follows: βR = βclass(h c) βdet(hd), where denotes an element-wise product of the two scoring metrics. Finally we apply BCE-loss [10] to train visual and pose BERT. For audio classification, we consider a classification layer to predict audio classes and similarly apply BCE-loss to train audio BERT. For the sound separation task, our goal is to learn separate spectrogram masks for each individual object. Following [55], we use a binary mask which effectively corresponds to hard attention and use per-pixel sigmoid cross entropy loss (BCE-loss) to train the network. Implementation Details. We used PyTorch to implement our network2. 
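Referring back to the weakly-supervised training objective in Eqs. (3)–(4) above, and before the implementation details continue, the following is a minimal sketch of the two-stream scoring head; module and variable names are our own and shapes are illustrative.

```python
# Two-stream weak supervision: a softmax over classes (Eq. 3) and a "detection"
# softmax over regions (Eq. 4), combined elementwise and summed over regions to
# give video-level class scores trained with BCE.
import torch
import torch.nn as nn

class WeaklySupervisedHead(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.cls_fc = nn.Linear(dim, num_classes)
        self.det_fc = nn.Linear(dim, num_classes)

    def forward(self, hidden):                        # hidden: (B, |R|, dim)
        beta_class = torch.softmax(self.cls_fc(hidden), dim=2)  # over classes
        beta_det = torch.softmax(self.det_fc(hidden), dim=1)    # over regions
        beta_r = beta_class * beta_det                # per-region class evidence
        return beta_r.sum(dim=1)                      # video-level scores in [0, 1]

# Training with multi-label targets y in {0,1}^C (e.g., instruments present):
# loss = nn.BCELoss()(head(hidden), y.float())
```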
We consider three3 random consecutive frames with size 224× 224× 3 as our input sequence for visual and pose BERT and use pre-trained ResNet50 [20] to extract global visual features for further processing. For the pose stream, we first predict 2D coordinates of body and finger key points of each frame using AlphaPose [12] and then use graph CNN [52] to generate feature vectors for each keypoint. Similar to prior works [15, 55], we sub-sample audio signals to 11KHz to reduce the computational cost and then select approximately 6s of audio by random cropping. To follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], we mix audio inputs and generate a time-frequency audio spectrogram using STFT with a Hann window size of 1022 and a hop length of 256. We then transform the spectrogram into the logfrequency scale to obtain the final 256× 256 time-frequency representation. The transformers for visual/pose and audio have a hidden state size of 1024 and 512, respectively, with 8 attention heads. We use the Adam optimizer with an initial learning rate of 1e−5 and batch size of 12 to train the network on 4 GTX 1080 GPUs for 6k epochs. Training takes approximately 192 hours. 3.2.2 Runtime Inference We use the MUSIC21 dataset [56] to train our network on two pretraining tasks: classification and sound source separation. We can use this network directly for sound separation on MUSIC21. We also fine-tune the pre-trained TriBERT on the MUSIC dataset [55] with 11 audio classes, which is a sub-set of the MUSIC21 dataset. We follow a fine-tuning strategy where we modify the classification layer from each pre-trained stream and then train our proposed model end-to-end with a learning rate of 1e−7 for 1500 epochs while keeping the rest of the hyper-parameters the same as the initial task. 4 Experiments Datasets. We consider the MUSIC21 dataset [56], which contains 1365 untrimmed videos of musical solos and duets from 21 instrument classes for the initial training of our TriBERT architecture. For fine-tuning, we use the MUSIC dataset [55], which is a subset of MUSIC21, containing 685 untrimmed videos of musical solos and duets from 11 instrument classes. 2https://github.com/ubc-vision/TriBERT 3BERT-based architectures, including ours, require large GPU memory and longer training time. Therefore, we use only three frames to reduce computational cost, but the number of frames can be easily increased with the same architecture (if resources allow). Further, we would like to highlight that a pose feature for one frame, actually takes into account T=256 frames of poses using a Spatial-Temporal Graph Convolutional Network. Therefore long-term contextual pose information is taken into account [52]. 4.1 Experiments for Sound Separation Evaluation Metrics. We use three common metrics to quantify the performance of sound separation: Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR), and Signal-to-Artifact Ratio (SAR). We report all of the results with the widely used mir_eval library [41]. Baselines. The MUSIC21 dataset contains 1365 untrimmed videos, but we found 314 of those to be missing. Moreover, the train/val/test split was unavailable. As a result, for fair comparison, we trained our baselines [15, 55] with the available videos using an 80/20 train/test split. We use publicly available code4 to train "Sound-of-Pixels" [55]. For "MUSIC-Gesture" [15], we re-implemented the model by extracting pose features using Graph CNN [52]. Our reproduced results are comparable with those reported5. 
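As a rough illustration of the audio preprocessing described above, the following sketch mixes two waveforms and computes the spectrogram under the "Mix-and-Separate" setting; the STFT parameters follow the text, but the log-frequency warp here is a simple nearest-neighbour approximation and the function name is ours, not the authors' code.

```python
# Mix two ~6 s, 11 kHz waveforms and compute a 256-bin log-frequency magnitude
# spectrogram with a Hann window of 1022 and hop length 256.
import math
import torch

def mix_and_spectrogram(wav1, wav2, n_fft=1022, hop=256, out_bins=256):
    mix = wav1 + wav2                                          # A_m(t) = A_1(t) + A_2(t)
    window = torch.hann_window(n_fft, device=mix.device)
    spec = torch.stft(mix, n_fft=n_fft, hop_length=hop, win_length=n_fft,
                      window=window, return_complex=True)      # (512, n_frames)
    mag, phase = spec.abs(), spec.angle()

    # Map the 512 linear frequency bins onto out_bins log-spaced bins
    # (nearest-neighbour resampling; the paper's exact warp may differ).
    idx = torch.logspace(0, math.log10(mag.shape[0] - 1), out_bins).long()
    return mag[idx.clamp(max=mag.shape[0] - 1)], phase
```

For roughly 6 s of 11 kHz audio, a hop of 256 yields on the order of 256 frames, which is consistent with the 1 × 256 × 256 input size quoted above; the phase is kept so that the predicted masks can be inverted back to waveforms with the inverse STFT.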
For the MUSIC dataset, we follow the experimental protocol from [42] and consider their reported results as our baselines. Quantitative and Qualitative Results. Table 1 shows the quantitative results for the sound separation pre-training task on the MUSIC21 dataset. Here, we include the performance of our method and baselines when we use only single-source videos (solos) or multi-source (solos+duets) to train all models. Our TriBERT outperforms (10.09 vs 8.08 for single-source in SDR) baseline models in all evaluation metrics. We then fine-tune our model on the MUSIC dataset with a train/val/test split from [16] (see Table 2). Our model again outperforms all baselines in all metrics (12.34 vs 9.29 in SDR). Figure 3 illustrates the corresponding qualitative results. The 1st, 2nd, and 3rd columns show the mixed video pairs and accompanying audio mixture, respectively. Columns 4 and 5 illustrate the ground-truth spectrogram mask while columns 6/7 and 8/9 show the predicted spectrogram mask by [15] and our method, respectively. Finally, the ground truth spectrogram, predicted spectrogram by [15], and our method are illustrated in columns 10/11, 12/13, and 14/15, respectively. It is clear that TriBERT, both quantitatively and qualitatively, outperforms the state-of-the-art in sound separation. 4.2 Multi-modal Retrieval Retrieval Variants. In this experiment, we analyze the semantic alignment between the 3 modalities that TriBERT learns to encode. This is done through cross-modal retrieval, where given a single or a pair of modality embeddings, we attempt to identify the matching embedding from a different modality. We consider 5 variants: audio→ vision, vision→ audio, audio→ pose, pose→ audio, and vision+audio→ pose. Throughout this section, we refer to the embedding we have as the query 4https://github.com/hangzhaomit/Sound-of-Pixels 5The reported SIR score in [15] is 15.81, which is close to our reimplementation of their method which achieves a score of 15.27. Our reproduced SDR score is a bit lower, compared to the 10.12 reported in [15]. However, this is perhaps expected given that 23% of the dataset was missing. embedding and the embedding we want to retrieve as the result embedding. We train and evaluate on the MUSIC21 dataset, using the same 80-20 train-test split used to learn TriBERT. We consider 2 types of embeddings for the 3 modalities. First, we use the transformer-based embeddings, consisting of the concatenations of the hidden representations hv0...v3, hp0...p3, and ha0...a3 for visual, pose, and audio, respectively. Additionally, we establish a baseline by training with the embeddings used as input to the three BERT streams. This baseline can be viewed as an ablation study for the transformer layers. Retrieval Training. Similar to [33], we train using an n-way multiple-choice setting. Here, n depends on the variant of the retrieval task, where n = 4 for the vision+audio to pose variant and n = 3 for the four remaining single-modality variants. In either case, one positive pair is used and n− 1 distractors are sampled. Further details are provided in the Supplemental Materials. We use an MLP that takes as input a fusion representation of both the query and result embeddings, computed as the element-wise product of the two. The module then outputs a single logit, interpreted as a binary prediction for whether the query and result embeddings are aligned. 
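A hedged sketch of the retrieval scoring module just described (our own naming and sizes, not the paper's code) is given below; it projects query and candidate embeddings to a shared size, fuses them by an element-wise product, and scores each of the n candidates.

```python
# n-way retrieval scoring: one positive candidate plus n-1 distractors are scored
# by an MLP over the element-wise product of projected query/result embeddings.
import torch
import torch.nn as nn

class RetrievalScorer(nn.Module):
    def __init__(self, query_dim: int, result_dim: int, dim: int = 1024):
        super().__init__()
        self.proj_q = nn.Linear(query_dim, dim)
        self.proj_r = nn.Linear(result_dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(),
                                 nn.Linear(dim // 2, 1))

    def forward(self, query, candidates):
        # query: (B, query_dim); candidates: (B, n, result_dim)
        q = self.proj_q(query).unsqueeze(1)            # (B, 1, dim)
        r = self.proj_r(candidates)                    # (B, n, dim)
        return self.mlp(q * r).squeeze(-1)             # alignment scores (B, n)

# Training: the positive candidate sits at index `target`; the softmax over the
# n scores is implicit in the cross-entropy loss.
# loss = nn.CrossEntropyLoss()(scorer(query, candidates), target)
```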
For the vision+audio→ pose variant, an additional MLP, based on [15], is used to combine the vision and audio embeddings before the final element-wise product with the pose embedding. Additionally, since both the transformerbased and pre-transformer embeddings are not consistent in shape across the three modalities, we also use linear layers as required to transform them to a consistent one. This overall retrieval network is trained end-to-end. For each multiple choice, the network computes an alignment score, after which a softmax is applied across all n scores. We train using a cross-entropy loss for 750 epochs with a batch size of 64 using the Adam optimizer with an initial learning rate of 2e-5. Retrieval Results. Figure 4 shows the qualitative results for two variants of retrieval. Additionally, Table 3 shows quantitative results for the 5 retrieval variants using the transformer-based representation, the baseline pre-transformer representation, and also a model that simply selects randomly from the pool. We see that retrieval using the transformer-based embeddings results in significantly better performance than the pre-transformer ones. This shows that the tri-modal co-attention modules are an integral component in learning a semantically meaningful relationship between the three modalities. Notably, in Table 3, we can see that vision+audio → pose is worse than audio → pose in top-1 accuracy. The performance of the two models is not necessarily directly comparable. Specifically, there are two issues that should be considered: - The input dimensionality and number of parameters of the vision+audio retrieval model is significantly larger, with an additional MLP layer used for fusion. This means that the vision+audio model is more prone to over-fitting, exhibited in the lower performance for top-1. Note that the top-5 and top-10 performance of vision+audio→ pose is better. - The number of distractors (n− 1) is different in the two settings. For single-modality retrieval variants, we use two distractors (negative pairings); while for the two-modality variant, we use three distractors. This may also marginally affect the performance, since in the n-way classification, having more distractors puts more focus on the negatives. However, we want to stress that the goal of these experiments is not to compare which modality or combination of modalities are best for retrieval. Instead, the goal is to illustrate the effectiveness of the TriBERT representations. Each of the five retrieval models is simply an instance of a retrieval task. We can use any alternative (more sophisticated) models for retrieval here. The key observation is that in all five cases, TriBERT representations perform significantly better in retrieval compared with baseline representations (used as input to TriBERT). This is strong evidence that TriBERT representations are effective. We make no claims with regards to optimality of the retrieval formulation or objective; it is simply used as a proxy for evaluating TriBERT representations. 5 Conclusion In this paper, we introduce TriBERT, a three-stream model with tri-modal co-attention blocks to generate a generic representation for multiple audio-visual tasks. We pre-train our model on the MUSIC21 dataset and show that our model exceeds state-of-the-art for sound separation6. We also find that TriBERT learns more generic and aligned multi-modal representations, exceeding on the cross-modal audio-visual-pose retrieval task. 
In this work, we limit ourselves to two datasets and fundamental audio-visual tasks. In the future, we plan to consider using more datasets and expanding to a broader set of tasks (e.g., generation). The role of positional embeddings should also be explored. Acknowledgments: This work was funded in part by the Vector Institute for AI, Canada CIFAR AI Chair, NSERC Canada Research Chair (CRC) and an NSERC Discovery and Discovery Accelerator Supplement Grants. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www. vectorinstitute.ai/#partners. Additional hardware support was provided by John R. Evans Leaders Fund CFI grant and Compute Canada under the Resource Allocation Competition award.
1. What is the main contribution of the paper, and how does it relate to the field of visual sound separation? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its application to human-centric audio-visual representation learning? 3. How does the reviewer assess the significance of the experiments conducted, and what kind of downstream tasks should be explored further? 4. What are the limitations of the presented quantitative results, and what ablation studies would enhance the paper's value? 5. Are there any concerns regarding the model design, such as the choice of backbone and modality integration?
Summary Of The Paper Review
Summary Of The Paper This work proposes TriBERT to handle the modalities of RGB image, poses and audios for tackling the task of sound source separation. The BERT model is used to extract features following ViLBERT. Experiments show appealing results on sound separation metrics compared with the author-reproduced previous SOTA methods. Review This paper illustrates a relatively reasonable story in its introduction, however, the story cannot be fully supported by the experiments conducted. The usage of the BERT model in the field of visual sound separation is quite interesting. The model descriptions are mostly clear enough for reproducibility. The reviewer thinks this paper is of certain value to the field of visual sound separation. ** The strengths of this paper are summarized as follows: ++ This paper is well written and easy to follow. ++ The idea of involving BERT in visual sound separation is interesting. ++ Most of the model design is reasonable and with care, which is important for application papers. ** The weaknesses of the paper are as follows: The conducted experiments do not match with the story in the introduction and the title. a) The title is “Human-centric” audio-visual representation learning, however, the experiments are conducted on MUSIC21 dataset with sound source separation task. I am not saying that separating instrument sound is not entirely “human-centric”, but this is only a relatively small area in the “human-centric” audio-visual field. I am expecting to see experiments conducted on human speech and its relationship with maybe co-speech gestures when I read the introduction. The experiments cannot support the title or the introduction at all. b) The core of ViLBERT or related work, is to use proxy tasks for training and show the effectiveness of learned features with downstream applications, while this paper basically focuses on sound separation. Although this is easy to understand: there are not so many downstream tasks if the model is trained on MUSIC dataset. The reviewer suggests that the authors either focus on visual sound separation (with a title like “TriBERT for visual sound separation”) or conduct experiments on large-scale “human-centric” datasets such as TED talks or AudioSet, and show the effectiveness of representation learning on more downstream tasks. No qualitative results and no video supplementary shown. Demo videos are extremely important for audio-visual papers. With no video results, it is impossible for reviewers to know the true performance of the method. Also, the authors do not provide any visual results of visual segmentation while the visual branch is designed for this. The retrieval experiments are weird. Though I understand the authors are trying to match the “representation learning” in the title, these kinds of experiments are useless and have never been conducted in related studies. The quantitative experiments are not persuasive. Although our community cares less about computational cost, the paper lacks a lot of ablation studies. For example, this paper leverages attention U-Net as backbone, which is rarely used in previous works. The entire visual network is more complicated than all previous work. And the three modalities in the TriBERT are not evaluated efficiently. What if one of the modality is erased? More needed ablations are not listed. Details. Why use only three frames which is the same as PixelPlayer for the pose branch? 
This is not reasonable given that the pose frames should cover more temporal information, as is done in Music Gesture. Why is Table 1 below Table 2, and why evaluate on MUSIC alone given that it is a subset of MUSIC21? The settings are not reasonable. The rebuttal addresses some of my concerns and provides positive revision directions. However, the paper requires a major revision from its current form (the story needs to change, or a large amount of additional experiments should be shown), thus I still do not recommend acceptance of this paper. I raise my rating from 4 to 5 and encourage the authors to keep improving this paper.
NIPS
Title TriBERT: Human-centric Audio-visual Representation Learning Abstract The recent success of transformer models in language, such as BERT, has motivated the use of such architectures for multi-modal feature learning and tasks. However, most multi-modal variants (e.g., ViLBERT) have limited themselves to visuallinguistic data. Relatively few have explored its use in audio-visual modalities, and none, to our knowledge, illustrate them in the context of granular audio-visual detection or segmentation tasks such as sound source separation and localization. In this work, we introduce TriBERT – a transformer-based architecture, inspired by ViLBERT, which enables contextual feature learning across three modalities: vision, pose, and audio, with the use of flexible co-attention. The use of pose keypoints is inspired by recent works that illustrate that such representations can significantly boost performance in many audio-visual scenarios where often one or more persons are responsible for the sound explicitly (e.g., talking) or implicitly (e.g., sound produced as a function of human manipulating an object). From a technical perspective, as part of the TriBERT architecture, we introduce a learned visual tokenization scheme based on spatial attention and leverage weak-supervision to allow granular cross-modal interactions for visual and pose modalities. Further, we supplement learning with sound-source separation loss formulated across all three streams. We pre-train our model on the large MUSIC21 dataset and demonstrate improved performance in audio-visual sound source separation on that dataset as well as other datasets through fine-tuning.
In addition, we show that the learned TriBERT representations are generic and significantly improve performance on other audio-visual tasks such as cross-modal audio-visual-pose retrieval by as much as 66.7% in top-1 accuracy. 1 Introduction Multi-modal audio-visual learning [57], which explores and leverages the relationship between visual and auditory modalities, has started to emerge as an important sub-field of machine learning and computer vision. Examples of typical tasks include: audio-visual separation and localization, where the goal is to segment sounds produced by individual objects in an audio and/or to localize those objects in a visual scene [15, 16, 42, 55]; and audio-visual correspondence, where the goal is often audio-visual retrieval [23, 47, 53]. Notably, some of the most recent audio-visual methods [15] leverage human pose keypoints, or landmarks, as an intermediate or contextual representation. This tends to improve the overall performance of sound separation, as pose and motion are important cues for characterising both the type of instrument being played and, potentially, over time, the rhythm of the individual piece [15]. It can also serve as an intermediate representation when generating video from acoustic signals [8, 44] for example. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Most of the existing architectures tend to extract features from the necessary modalities using pre-trained backbones (e.g., CNNs applied to video frames [55], object regions [16], and audio spectrograms; and/or graph CNN for human pose [15]) and then construct problem-specific architectures that often utilize simple late fusion for cross-modal integration in decoding (e.g., to produce spectrogram masks [15, 16, 55]). This is contrary to current trends in other multi-modal problem domains, where over the past few years, approaches have largely consolidated around generic multi-modal feature learning architectures that are task agnostic to produce contextualized feature representations and then fine-tune those representations to a variety of tasks (e.g., visual question answering (VQA) or reasoning (VCR)) and datasets. Examples of such architectures include ViLBERT [33], VL-BERT [46], and Unicoder-VL [31], all designed specifically for visual-linguistic tasks. Audio-visual representation learning has, in comparison, received much less attention. Most prior works [51] assume a single sound source per video and rely on audio-visual alignment objectives. Exceptions include [39], which relies on proposal mechanisms and multiple-instance learning [49] or co-clustering [25]. These approaches tend to integrate multi-modal features extracted using pre-trained feature extractors (e.g., CNNs) at a somewhat shallow level. The very recent variants [6, 28, 35] leverage transformers for audio-visual representation learning through simple classification [6] and self-supervised [28] or contrastive [35] learning objectives while only illustrating performance on video-level audio-visual action classification. To the best of our knowledge, no audio-visual representation learning approach to date has explored pose as one of the constituent modalities; nor has shown that feature integration and contextualization at a hierarchy of levels, as is the case for BERT-like architectures, can lead to improvements on granular audio-visual tasks such as audio-visual sound source separation. 
To address the aforementioned limitations, we formulate a human-centric audio-visual representation learning architecture, inspired by ViLBERT [33] and other transformer-based designs, with an explicit goal of improving the state-of-the-art in audio-visual sound source separation. Our transformer model takes three streams of information: video, audio, and (pose) keypoints and co-attends among those three modalities to arrive at enriched representations that can then be used for the final audiovisual sound separation task. We illustrate that these representations are general and also improve performance on other auxiliary tasks (e.g., forms of cross-modal audio-visual-pose retrieval). From a technical perspective, unlike ViLBERT and others, our model does not rely on global frame-wise features nor an external proposal mechanism. Instead, we leverage a learned attention to form visual tokens, akin to [42], and leverage weakly supervised objectives that avoid single sound-source assumptions for learning. In addition, we introduce spectrogram mask prediction as one of our pre-training tasks to enable the network to better learn task-specific contextualized features. Contributions: Foremost, we introduce a tri-modal VilBERT-inspired model, which we call TriBERT, that co-attends among visual, pose keypoint, and audio modalities to produce highly contextualized representations. We show that these representations, obtained by optimizing the model with respect to uni-modal (weakly-supervised) classification and sound separation pretraining objectives, produce features that improve audio-visual sound source separation at large and also work well on other downstream tasks. Further, to avoid reliance on the image proposal mechanisms, we formulate tokenization in the image stream in terms of learned attentional pooling, which is learned jointly. This alleviates the need for externally trained detection mechanisms, such as Faster R-CNN and variants. We illustrate competitive performance on a number of granular audio-visual tasks both by using the TriBERT model directly, using it as a feature extractor, or by fine-tuning it. 2 Related works Audio-visual Tasks. There exists a close relationship between visual scenes and the sounds that they produce. This relationship has been leveraged to complete various audio-visual tasks. Based on [57]’s survey of audio-visual deep learning, these tasks can be categorized into four subfields, three of which are addressed in this paper and described in the following three subsections. Audio-visual Sound Source Separation and Localization. Sound source separation and the related task of sound source localization have been studied quite extensively. Previous works studying separation, also known as the cocktail party problem [19], leverage multi-modal audio-visual information [11, 14] to help improve performance with respect to their audio-only counterparts [26, 34]. Examples include learning correlations between optical flow and masked frequencies [9, 13], using graphical models [21], detecting salient motion signals that correspond to audio events [30, 40], and extracting pose keypoints to model human movements [15]. A close connection between separation and localization has also been illustrated [40, 43, 55, 56]. For example, [16, 42] both formulate the task as one of auditory and visual co-segmentation, either with pre-trained object regions obtained by the detector [16] or directly from the image [42]. 
All of these approaches contain highly specialized architectures with custom fusion schemes. We aim to leverage the flexibility of transformer models to create generalized multi-modal representations that improve on audio-visual tasks. Audio-visual Representation Learning. The goal is typically to learn aligned representations. The quality of these representations has been shown to greatly impact the overall performance of tasks downstream [4]. A common strategy for representation learning is to introduce a proxy task. In the audio-visual space, past works [1, 2, 38] have trained networks by having them watch and listen to a large amount of unlabeled videos containing both positive samples of matching audio and visual pairs and negative samples of mismatched pairs; the proxy task is binary classification of whether or not the audio and visual match each other. Other proxy tasks include determining whether or not an audio-visual pair is time synchronized [27]; and [29] uses a classification task to identify the correct visual clip or audio stream from a set with negative samples. However, these works rely on the assumption that only one main sound source occurs at a time and everything else is background noise. Our model uses a weakly supervised proxy objective to learn representations for multiple sources of sound (two in experiments) occurring simultaneously and also learns to incorporate pose features. Audio-visual Correspondence Learning. One of the fundamental tasks in correspondence learning related to our work is cross-modality retrieval. Most prior works focus on audio-visual retrieval [24, 36, 48] and propose learning a joint embedding space where both modalities can be mapped to. In this space, semantically related embeddings are close to each other and thus retrieval can be performed by selecting the closest embedding to the query from the alternate modality. In our work, we demonstrate that enhanced feature representations obtained by our pretrained model capture aligned semantics and lead to much better cross-modal retrieval than baseline representations. Visiolinguistic Representation Learning. Our model is inspired by the recent successes of visiolinguistic representations. Most such approaches leverage a combination of uni-modal and cross-modal transformer modules to pre-train generic visiolinguistic representations on masked language and/or multi-modal alignment tasks. For example, [33] proposes separate streams for each modality that communicate with each other through co-attention, while [46] uses a single-stream model that takes both visual and linguistic embeddings as input. In our work, we also leverage co-attention modules to learn joint representations between audio, pose, and vision modalities. However, in addition to extending co-attention, we also focus on reformulating image tokenization and demonstrate the ability to learn with weakly-supervised classification objectives as opposed to masked token predictions. 3 Approach We introduce TriBERT, a network that learns a joint representation of three modalities: vision, pose, and audio. We briefly review ViLBERT, the architecture that inspired TriBERT, in Section 3.1. We then describe our TriBERT architecture in Section 3.2, including pretraining tasks and objectives. 3.1 Reviewing Vision-and-Language BERT (ViLBERT) Motivated by the recent success of the BERT architecture for transfer learning in language modeling, Lu et al. [33] proposed ViLBERT to represent text and visual content jointly. 
ViLBERT is a twostream model for image regions and text segments. Each stream is similar to the BERT architecture, containing a series of transformer blocks (TRM) [50]. Given an image I with corresponding regions-of-interest (RoIs) or bounding boxes v0, v1, ...vN and an input sentence S with word tokens w0, w1, ...wT , the final output representations are hv0, hv1, ..., hvN and hw0, hw1, ..., hwT for the visual and linguistic features, respectively. To exchange information between the two modalities, the authors introduced a co-attentional transformer layer which computes query (Q), key (K), and value (V ) pairs like a standard transformer block. The keys and values from each modality are then fed to the multi-headed attention block of the other modality. The attention block in each stream generates attention-pooled features conditioned on the other modality and outputs a multi-modal joint representation which outperforms single-stream models across multiple vision-and-language tasks. 3.2 TriBERT Architecture The architecture of our proposed TriBERT network is illustrated in Figure 1. Inspired by the recent success of ViLBERT in the vision-and-language domain, we modify its architecture to a three-stream network for vision, pose, and audio. Similar to ViLBERT [33], we use a bi-directional Transformer encoder [50] as the backbone network. However, TriBERT also introduces integral components that differentiate its architectural design. First, instead of using bounding box visual features generated by a pre-trained object detector [33] or CNN feature columns [7], TriBERT uses a jointly trained weakly supervised visual segmentation network. Our end-to-end segmentation network takes a sequence of consecutive frames to detect and segment objects, and the corresponding features are pooled and fed as tokens to the visual stream. Second, the pose tokens are characterized by per-person keypoints encoded using a Graph CNN, and the audio token is produced by the VGGish Network [22] applied to an audio spectogram. All three types of tokens form the input to TriBERT, which refines them using tri-modal co-attention to arrive at the final multi-modal representations. Training TriBERT requires the definition of proxy/pretraining tasks and the corresponding losses (see Section 3.2.1). Specifically, while we adopt token masking used in ViLBERT and others, we are unable to define classification targets per token in our visual and pose streams. This is because we only assume per-video labels (e.g., of instruments played) and no access to how those map to attended sounding regions or person instances involved. To address this, we introduce weakly-supervised classification losses for those two streams. Since only one global audio representation is used, this is unnecessary in the audio stream and standard cross-entropy classification can be employed. Finally, motivated by recent works that show that multi-task pretraining is beneficial for ViLBERT [32], we introduce an additional spectrogram mask prediction pretraining task which predicts spectrogram masks for each individual audio source from the input spectrogram (bottom block, Figure 1). Visual Representations. Unlike [33], we consider input video frames instead of detected object/bounding box features as our visual input and propose an end-to-end approach to detect and segment objects from each individual frame. Figure 2 illustrates our visual segmentation network which takes in RGB frames as input. 
To extract global features, we use ResNet50 [20] as the backbone network followed by a 3 × 3 convolution to generate H × W visual spatial features which are then fed into the segmentation network. Following [54], we use a decoupled spatial neural attention structure to detect and localize discriminative objects simultaneously. The attention network has two branches: (1) Expansive attention detector, which aims to detect object regions and generate the expansive attention map SE ∈ R^{C×H×W} (top branch of Figure 2); and (2) Discriminative attention detector, which aims to predict discriminative regions and generate the discriminative attention map SD ∈ R^{C×H×W} (bottom branch of Figure 2). The expansive attention detector contains a drop-out layer followed by a 1 × 1 convolution, another drop-out layer, a non-linear activation, and a spatial-normalization layer, defined as follows: λ^c_(i,j) = F(W_c^T V_e(:, i, j) + b^c), (1) and α^c_(i,j) = λ^c_(i,j) / (Σ_{i=1}^{H} Σ_{j=1}^{W} λ^c_(i,j)), (2) where c ranges over the C classes and F(·) denotes the non-linear activation function. The final attention map Am is generated as Am = SE ⊙ SD, where ⊙ denotes element-wise multiplication. Spatial average pooling is applied to Am to generate a classification score for each class, and the features of the top two classes are pooled from the spatial visual feature Ve. The resultant 3 × 2 × 1024 visual embeddings are used to train our proposed TriBERT architecture, where 3 corresponds to the number of frames and 2 to the number of "objects" per frame. Keypoint (pose) Representations. Our goal is to capture human body and finger movement through keypoint representations. Therefore, we extract 26 keypoints for body joints and 21 keypoints for each hand using the AlphaPose toolbox [12]. As a result, we identify the 2D (x, y) coordinates and corresponding confidence scores of 68 body joints. Following [15], we use a Graph CNN to generate semantic context comprising those joints. Similar to prior work [52] on action recognition, we construct a Spatial-Temporal Graph Convolutional Network G = {V, E} where each node vi ∈ V corresponds to a body joint keypoint and each edge ei ∈ E to the natural connectivity between those keypoints. We use the 2D coordinates of the detected body joints with confidence scores as input to each node and construct a spatial-temporal graph by: (1) connecting human body joints within a single frame according to body structure; and (2) connecting each joint with the same joint in consecutive frames. This way, multiple layers of spatial-temporal graph convolutions are constructed to generate higher-level features for human keypoints. We use publicly available code (https://github.com/yysijie/st-gcn) to re-train their model on our dataset and extract body joint features of size 2 × 256 × 68 before the final classification layer (corresponding to two person instances). We apply a linear layer to transform these to 3 × 2 × 1024 input embeddings for pose BERT, where 3 corresponds to the number of visual frames and 2 to the maximum number of persons per frame. Audio Representations. Consistent with prior works, we use a time-frequency representation of the input audio. We apply STFT [18] to generate the corresponding spectrogram and then transform the magnitudes of the spectrogram into the log-frequency scale for further processing.
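As a concrete illustration of this audio preprocessing, the following is a minimal sketch (our assumption of the pipeline, not the authors' code) that builds a log-frequency magnitude spectrogram with the STFT parameters reported in the implementation details later (Hann window of size 1022, hop length 256, roughly 6 s of 11 kHz audio); the nearest-neighbour log-frequency resampling and the fixed 256-frame crop are our assumptions about how the 1 × 256 × 256 input noted below is formed.

```python
# Minimal sketch (assumed preprocessing, not the authors' code): STFT magnitude
# mapped onto a log-spaced frequency axis and cropped/padded to 256 frames.
import math
import torch
import torch.nn.functional as F

def log_freq_spectrogram(wav: torch.Tensor, n_fft: int = 1022, hop: int = 256,
                         out_bins: int = 256, out_frames: int = 256) -> torch.Tensor:
    # wav: (num_samples,) mono waveform, e.g. ~6 s at 11 kHz.
    spec = torch.stft(wav, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True)
    mag = spec.abs()                                   # (n_fft // 2 + 1, num_frames)
    # Resample the linear frequency axis onto a log-spaced grid
    # (nearest-neighbour indexing; 256 log-spaced bins is our assumption).
    idx = torch.logspace(0, math.log10(mag.shape[0] - 1), steps=out_bins).long()
    mag = mag[idx]                                     # (256, num_frames)
    # Crop or zero-pad the time axis to a fixed number of frames.
    if mag.shape[1] >= out_frames:
        mag = mag[:, :out_frames]
    else:
        mag = F.pad(mag, (0, out_frames - mag.shape[1]))
    return mag.unsqueeze(0)                            # (1, 256, 256)

spec = log_freq_spectrogram(torch.randn(11000 * 6))
```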
The size of the final input audio spectrogram is 1 × 256 × 256 and it is used in two ways: (1) as an audio embedding for audio BERT; and (2) as the input audio for the attention U-net for the task of sound source separation, which predicts individual audio spectrogram masks (see Figure 1). Before passing to audio BERT, we use a VGGish Network [22] to extract global features for the input audio embedding. Tri-modal Co-attention. Recent works [3, 33] propose co-attentional transformer layers to generate effective representations of vision conditioned on language and vice versa. [Figure: tri-modal co-attention block. Each stream (visual, pose, audio) passes its queries QV, QP, QA through a multi-head attention layer whose keys and values come from the concatenation of the other two streams (K(P,A)/V(P,A), K(V,A)/V(V,A), K(P,V)/V(P,V)), followed by Add & Norm and a feed-forward layer, mapping HV(i), HP(j), HA(k) to HV(i+1), HP(j+1), HA(k+1).] In this paper, we introduce a tri-modal co-attentional layer, illustrated above, by extending ViLBERT’s co-attentional transformer layers [33]. Given intermediate representations for vision, pose, and audio, denoted as HV(i), HP(j), and HA(k), respectively, each stream computes individual query (Q), key (K), and value (V) matrices. The keys and values from two modalities are then concatenated together and fed as input to the multi-head attention block of the third modality. As a result, the block generates attention features conditioned on the other two modalities. We keep the rest of the architecture, such as the feed-forward layers, residual connections, etc., the same as a standard transformer block, which is then used to generate effective multi-modal features (a minimal code sketch of this block is given below, after the classification tasks). 3.2.1 Training Tasks We pre-train TriBERT jointly on two tasks: instrument classification and sound source separation. Our proposed architecture has three separate streams and each stream performs an individual classification task. To train our TriBERT model, we use the MUSIC21 dataset [56], which contains 21 instruments. Weakly-supervised Visual and Pose Classification. Our visual segmentation network generates attention features for input video frames. We then apply a spatial pooling, and the resulting feature vector is fed into the visual BERT. We use a special <SOS> token at the beginning of the input frame sequence to represent the entire visual input. Following [33], we apply masking to approximately 15% of the input image regions (see Figure 1). The output of the visual BERT is a sequence of hidden representations hv0, hv1, ..., hvN conditioned on the pose and audio modalities. We use mean pooling of all hidden representations to perform classification for the detected objects. Similarly, pose BERT generates a sequence of hidden representations hp0, hp1, ..., hpN conditioned on the visual and audio modalities, and we apply classification based on the mean pooling of all hidden states. Due to the lack of instance annotations, we cannot use region/pose level supervision. Following [5], we use a weakly-supervised approach to perform region selection and classification. Audio Classification. Since we do not have a sequence of audio embeddings, we artificially create an audio sequence for computational convenience by repeating the VGGish audio feature to generate a sequence of hidden representations ha0, ha1, ..., haN conditioned on the visual and pose modalities. This is done purely for engineering convenience to allow consistent use of tri-modal co-attention across modalities. We then apply audio classification on the mean feature of all hidden representations.
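As referenced above, the following is a minimal sketch of one tri-modal co-attentional layer (a simplified module of our own, not the released implementation). For simplicity we assume all three streams share a single hidden size; in the actual model the visual/pose and audio streams use hidden sizes of 1024 and 512, respectively, which would require additional projection layers.

```python
# Minimal sketch (our simplified module): the query stream attends over the
# concatenated keys/values of the other two modalities, followed by
# Add & Norm and a feed-forward layer, as in a standard transformer block.
import torch
import torch.nn as nn

class CoAttentionBlock(nn.Module):
    def __init__(self, dim: int = 1024, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, h_q, h_kv1, h_kv2):
        # h_q: (batch, n_q, dim) tokens of the stream being updated;
        # h_kv1 / h_kv2: token sequences of the other two modalities.
        kv = torch.cat([h_kv1, h_kv2], dim=1)         # concatenated keys/values
        out, _ = self.attn(h_q, kv, kv)               # attend over both modalities
        h = self.norm1(h_q + out)                     # Add & Norm
        return self.norm2(h + self.ffn(h))            # feed-forward + Add & Norm

# Illustrative token counts only; the same block is reused for each stream
# by swapping the roles of the three inputs.
block = CoAttentionBlock()
h_vis = block(torch.randn(2, 7, 1024),               # visual tokens
              torch.randn(2, 7, 1024),                # pose tokens
              torch.randn(2, 4, 1024))                # audio tokens
```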
Multi-modal Sound Source Separation. We consider sound source separation as one of our initial tasks and follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], a well-known approach to solve this problem. The goal is to mix multiple audio signals to generate an artificially complex auditory representation and then learn to separate individual sounds from the mixture. Given two input videos V1 and V2 with accompanying audio A1(t) and A2(t), we mix A1 and A2 to generate a complex audio signal mixture Am(t) = A1(t) + A2(t). Suppose V1 has two objects o1′ and o1′′ with accompanying audio a1′ and a1′′ while V2 has one object o2′ with audio a2′. The goal is to separate sounds a1′, a1′′, and a2′ from the mixture Am(t) by predicting spectrogram masks using attention U-net [37], which takes in the mixed spectrogram as input. Attention U-net contains 7 convolutions and 7 de-convolutions with skip connections. The skip connections use attention gates (AGs), which comprise simple additive soft attention, to highlight relevant regions of the audio spectrograms. The overhead of attention U-Net over U-Net is fairly minimal. Specifically, in terms of the number of parameters, attention U-Net contains a modest 9% more parameters as compared to U-Net and the inference speed is only 7% slower [37]. The attention U-net outputs the final magnitude of the spectrogram mask (bottom branch in Figure 1) guided by audio-visual-pose features. Following [15], we adopt a self-attention based early fusion between the bottleneck of the attention U-net and the fused features (i.e., concatenation of features) corresponding to the <SOS> tokens of the three BERT streams. We combine the predicted magnitude of the spectrogram mask from the attention U-net with the phase of the input spectrogram and then use inverse STFT [18] to get back the waveform of the prediction. Training Objective. We consider weakly-supervised classification for the visual and pose modalities. Following [5], we use two data streams from the hidden state of each modality. The first stream corresponds to a class score (βclass) for each individual region to perform recognition. This is achieved by a linear layer followed by a softmax operation (see Eq. 3). The second stream computes a probability distribution (βdet) for performing a proxy detection. This is done by using another linear layer followed by another softmax operation (see Eq. 4) as follows: β_class(h^c)_{ij} = exp(h^c_{ij}) / Σ_{t=1}^{C} exp(h^c_{tj}), (3) and β_det(h^d)_{ij} = exp(h^d_{ij}) / Σ_{t=1}^{|R|} exp(h^d_{it}), (4) where h^c ∈ R^{C×|R|}, h^d ∈ R^{C×|R|}, and C denotes the number of classes; i.e., Eq. (3) normalizes over classes for each region, while Eq. (4) normalizes over regions for each class. We then aggregate the recognition and detection scores to predict the class of all image regions as follows: βR = β_class(h^c) ⊙ β_det(h^d), where ⊙ denotes an element-wise product of the two scoring matrices. Finally, we apply BCE-loss [10] to train visual and pose BERT. For audio classification, we consider a classification layer to predict audio classes and similarly apply BCE-loss to train audio BERT. For the sound separation task, our goal is to learn separate spectrogram masks for each individual object. Following [55], we use a binary mask which effectively corresponds to hard attention and use per-pixel sigmoid cross entropy loss (BCE-loss) to train the network.
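The two-stream weak-supervision head of Eqs. (3)-(4) can be sketched as follows (our reading of the equations, not the authors' code); the module name and feature dimensions are illustrative.

```python
# Minimal sketch of the two-stream weak-supervision head: class scores are
# softmaxed over classes, detection scores over regions, multiplied
# element-wise, and summed over regions to give video-level class
# probabilities trained with BCE against per-video labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeakSupHead(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.cls_head = nn.Linear(dim, num_classes)   # recognition stream
        self.det_head = nn.Linear(dim, num_classes)   # proxy-detection stream

    def forward(self, regions: torch.Tensor) -> torch.Tensor:
        # regions: (batch, num_regions, dim) hidden states of one stream
        h_c = self.cls_head(regions)                  # (batch, |R|, C)
        h_d = self.det_head(regions)
        beta_cls = F.softmax(h_c, dim=2)              # Eq. (3): softmax over classes
        beta_det = F.softmax(h_d, dim=1)              # Eq. (4): softmax over regions
        beta_r = beta_cls * beta_det                  # element-wise product
        return beta_r.sum(dim=1)                      # video-level class scores in [0, 1]

head = WeakSupHead(dim=1024, num_classes=21)
scores = head(torch.randn(4, 6, 1024))
labels = torch.zeros(4, 21)
labels[:, 3] = 1.0                                    # per-video instrument labels
loss = F.binary_cross_entropy(scores, labels)
```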
Implementation Details. We used PyTorch to implement our network; the code is available at https://github.com/ubc-vision/TriBERT. We consider three random consecutive frames with size 224 × 224 × 3 as our input sequence for visual and pose BERT and use pre-trained ResNet50 [20] to extract global visual features for further processing. (We use only three frames to reduce computational cost, since BERT-based architectures, including ours, require large GPU memory and longer training time; the number of frames can be easily increased with the same architecture if resources allow. Further, we would like to highlight that a pose feature for one frame actually takes into account T=256 frames of poses using a Spatial-Temporal Graph Convolutional Network, so long-term contextual pose information is taken into account [52].) For the pose stream, we first predict 2D coordinates of body and finger keypoints of each frame using AlphaPose [12] and then use a graph CNN [52] to generate feature vectors for each keypoint. Similar to prior works [15, 55], we sub-sample audio signals to 11 kHz to reduce the computational cost and then select approximately 6 s of audio by random cropping. To follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], we mix audio inputs and generate a time-frequency audio spectrogram using STFT with a Hann window size of 1022 and a hop length of 256. We then transform the spectrogram into the log-frequency scale to obtain the final 256 × 256 time-frequency representation. The transformers for visual/pose and audio have a hidden state size of 1024 and 512, respectively, with 8 attention heads. We use the Adam optimizer with an initial learning rate of 1e−5 and batch size of 12 to train the network on 4 GTX 1080 GPUs for 6k epochs. Training takes approximately 192 hours. 3.2.2 Runtime Inference We use the MUSIC21 dataset [56] to train our network on two pretraining tasks: classification and sound source separation. We can use this network directly for sound separation on MUSIC21. We also fine-tune the pre-trained TriBERT on the MUSIC dataset [55] with 11 audio classes, which is a subset of the MUSIC21 dataset. We follow a fine-tuning strategy where we modify the classification layer from each pre-trained stream and then train our proposed model end-to-end with a learning rate of 1e−7 for 1500 epochs while keeping the rest of the hyper-parameters the same as the initial task. 4 Experiments Datasets. We consider the MUSIC21 dataset [56], which contains 1365 untrimmed videos of musical solos and duets from 21 instrument classes, for the initial training of our TriBERT architecture. For fine-tuning, we use the MUSIC dataset [55], which is a subset of MUSIC21, containing 685 untrimmed videos of musical solos and duets from 11 instrument classes. 4.1 Experiments for Sound Separation Evaluation Metrics. We use three common metrics to quantify the performance of sound separation: Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR), and Signal-to-Artifact Ratio (SAR). We report all of the results with the widely used mir_eval library [41]. Baselines. The MUSIC21 dataset contains 1365 untrimmed videos, but we found 314 of those to be missing. Moreover, the train/val/test split was unavailable. As a result, for fair comparison, we trained our baselines [15, 55] with the available videos using an 80/20 train/test split. We use publicly available code (https://github.com/hangzhaomit/Sound-of-Pixels) to train "Sound-of-Pixels" [55]. For "MUSIC-Gesture" [15], we re-implemented the model by extracting pose features using Graph CNN [52]. Our reproduced results are comparable with those reported: the SIR score reported in [15] is 15.81, which is close to our reimplementation's 15.27; our reproduced SDR score is a bit lower compared to the 10.12 reported in [15], but this is perhaps expected given that 23% of the dataset was missing.
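To make the evaluation protocol above concrete, SDR, SIR, and SAR can be computed with the mir_eval library roughly as follows (an illustrative sketch; the array shapes and the toy signals are our assumptions).

```python
# Illustrative sketch of the evaluation metrics: SDR / SIR / SAR computed with
# the mir_eval library cited above; it expects (n_sources, n_samples) arrays.
import numpy as np
import mir_eval.separation

reference = np.random.randn(2, 11000 * 6)                    # ground-truth sources
estimated = reference + 0.1 * np.random.randn(2, 11000 * 6)  # separated estimates

sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(reference, estimated)
print(sdr.mean(), sir.mean(), sar.mean())
```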
For the MUSIC dataset, we follow the experimental protocol from [42] and consider their reported results as our baselines. Quantitative and Qualitative Results. Table 1 shows the quantitative results for the sound separation pre-training task on the MUSIC21 dataset. Here, we include the performance of our method and baselines when we use only single-source videos (solos) or multi-source (solos+duets) to train all models. Our TriBERT outperforms (10.09 vs 8.08 for single-source in SDR) baseline models in all evaluation metrics. We then fine-tune our model on the MUSIC dataset with a train/val/test split from [16] (see Table 2). Our model again outperforms all baselines in all metrics (12.34 vs 9.29 in SDR). Figure 3 illustrates the corresponding qualitative results. The 1st, 2nd, and 3rd columns show the mixed video pairs and accompanying audio mixture, respectively. Columns 4 and 5 illustrate the ground-truth spectrogram mask while columns 6/7 and 8/9 show the predicted spectrogram mask by [15] and our method, respectively. Finally, the ground truth spectrogram, predicted spectrogram by [15], and our method are illustrated in columns 10/11, 12/13, and 14/15, respectively. It is clear that TriBERT, both quantitatively and qualitatively, outperforms the state-of-the-art in sound separation. 4.2 Multi-modal Retrieval Retrieval Variants. In this experiment, we analyze the semantic alignment between the 3 modalities that TriBERT learns to encode. This is done through cross-modal retrieval, where given a single or a pair of modality embeddings, we attempt to identify the matching embedding from a different modality. We consider 5 variants: audio → vision, vision → audio, audio → pose, pose → audio, and vision+audio → pose. Throughout this section, we refer to the embedding we have as the query embedding and the embedding we want to retrieve as the result embedding. We train and evaluate on the MUSIC21 dataset, using the same 80-20 train-test split used to learn TriBERT. We consider 2 types of embeddings for the 3 modalities. First, we use the transformer-based embeddings, consisting of the concatenations of the hidden representations hv0...v3, hp0...p3, and ha0...a3 for visual, pose, and audio, respectively. Additionally, we establish a baseline by training with the embeddings used as input to the three BERT streams. This baseline can be viewed as an ablation study for the transformer layers. Retrieval Training. Similar to [33], we train using an n-way multiple-choice setting. Here, n depends on the variant of the retrieval task, where n = 4 for the vision+audio to pose variant and n = 3 for the four remaining single-modality variants. In either case, one positive pair is used and n − 1 distractors are sampled. Further details are provided in the Supplemental Materials. We use an MLP that takes as input a fusion representation of both the query and result embeddings, computed as the element-wise product of the two. The module then outputs a single logit, interpreted as a binary prediction for whether the query and result embeddings are aligned.
For the vision+audio→ pose variant, an additional MLP, based on [15], is used to combine the vision and audio embeddings before the final element-wise product with the pose embedding. Additionally, since both the transformerbased and pre-transformer embeddings are not consistent in shape across the three modalities, we also use linear layers as required to transform them to a consistent one. This overall retrieval network is trained end-to-end. For each multiple choice, the network computes an alignment score, after which a softmax is applied across all n scores. We train using a cross-entropy loss for 750 epochs with a batch size of 64 using the Adam optimizer with an initial learning rate of 2e-5. Retrieval Results. Figure 4 shows the qualitative results for two variants of retrieval. Additionally, Table 3 shows quantitative results for the 5 retrieval variants using the transformer-based representation, the baseline pre-transformer representation, and also a model that simply selects randomly from the pool. We see that retrieval using the transformer-based embeddings results in significantly better performance than the pre-transformer ones. This shows that the tri-modal co-attention modules are an integral component in learning a semantically meaningful relationship between the three modalities. Notably, in Table 3, we can see that vision+audio → pose is worse than audio → pose in top-1 accuracy. The performance of the two models is not necessarily directly comparable. Specifically, there are two issues that should be considered: - The input dimensionality and number of parameters of the vision+audio retrieval model is significantly larger, with an additional MLP layer used for fusion. This means that the vision+audio model is more prone to over-fitting, exhibited in the lower performance for top-1. Note that the top-5 and top-10 performance of vision+audio→ pose is better. - The number of distractors (n− 1) is different in the two settings. For single-modality retrieval variants, we use two distractors (negative pairings); while for the two-modality variant, we use three distractors. This may also marginally affect the performance, since in the n-way classification, having more distractors puts more focus on the negatives. However, we want to stress that the goal of these experiments is not to compare which modality or combination of modalities are best for retrieval. Instead, the goal is to illustrate the effectiveness of the TriBERT representations. Each of the five retrieval models is simply an instance of a retrieval task. We can use any alternative (more sophisticated) models for retrieval here. The key observation is that in all five cases, TriBERT representations perform significantly better in retrieval compared with baseline representations (used as input to TriBERT). This is strong evidence that TriBERT representations are effective. We make no claims with regards to optimality of the retrieval formulation or objective; it is simply used as a proxy for evaluating TriBERT representations. 5 Conclusion In this paper, we introduce TriBERT, a three-stream model with tri-modal co-attention blocks to generate a generic representation for multiple audio-visual tasks. We pre-train our model on the MUSIC21 dataset and show that our model exceeds state-of-the-art for sound separation6. We also find that TriBERT learns more generic and aligned multi-modal representations, exceeding on the cross-modal audio-visual-pose retrieval task. 
In this work, we limit ourselves to two datasets and fundamental audio-visual tasks. In the future, we plan to consider using more datasets and expanding to a broader set of tasks (e.g., generation). The role of positional embeddings should also be explored. Acknowledgments: This work was funded in part by the Vector Institute for AI, Canada CIFAR AI Chair, NSERC Canada Research Chair (CRC) and an NSERC Discovery and Discovery Accelerator Supplement Grants. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www. vectorinstitute.ai/#partners. Additional hardware support was provided by John R. Evans Leaders Fund CFI grant and Compute Canada under the Resource Allocation Competition award.
1. What is the focus and contribution of the paper on audio-visual representation learning? 2. What are the strengths of the proposed TriBERT model, particularly in its ability to accept three modalities? 3. What are the weaknesses of the paper, especially regarding its novelty and experimentation? 4. Do you have any concerns about the limitation of the paper's contributions? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper Pointing out that recent transformer-based models are mostly designed for visual-language data, this paper introduces TriBERT, which specifically targets audio-visual modalities to address this limitation. Inspired by ViLBERT, TriBERT is a transformer-based model which enables contextual feature learning across three modalities: vision, pose, and audio, with the use of flexible co-attention. Besides, this paper introduces a learned visual tokenization scheme based on spatial attention and leverages weak supervision to allow granular cross-modal interactions for the visual and pose modalities. To demonstrate the proposed approach, TriBERT is pretrained on MUSIC21 and shows improved performance in audio-visual sound source separation and in audio-visual-pose retrieval. Review This paper introduces TriBERT, a three-stream model which can accept three modalities, i.e., audio, visual, and pose. By utilizing tri-modal co-attention blocks, generic audio-visual representations are obtained for the audio-visual tasks of sound source separation and audio-visual-pose retrieval. This work of extending transformer-based models to modalities beyond vision and language, i.e., audio-visual, is well motivated. However, my main concern is that the novelty of this work is limited. Though extending transformers to audio and pose is not a trivial problem, the approach used is straightforward. The whole TriBERT model is essentially a combination of existing building blocks, e.g., BERT, graph CNN, segmentation net, VGGish net, U-Net, etc. The training objective is designed in a multi-task manner, i.e., visual and pose classification, audio classification, and multi-modal sound source separation. From the aspects of both model design and algorithm/training objective, I do not see much novelty. Besides, the experiments are not strong enough. First, with comparisons against only a few SOTA approaches, I cannot be fully convinced. Also, why are there no comparisons with SOTA for the retrieval task? Many recent audio-visual representation learning works demonstrate their approaches on 2-3 audio-visual tasks, e.g., audio event classification and activity classification. I am wondering whether these tasks could be considered to demonstrate the authors' claim of 'learning generic audio-visual representations'. I also did not see any ablation studies of the proposed approach. For example, the authors claim they propose a new learned visual tokenization scheme, but no experimental results are shown to demonstrate the effectiveness of this tokenization scheme.
NIPS
Title TriBERT: Human-centric Audio-visual Representation Learning Abstract The recent success of transformer models in language, such as BERT, has motivated the use of such architectures for multi-modal feature learning and tasks. However, most multi-modal variants (e.g., ViLBERT) have limited themselves to visuallinguistic data. Relatively few have explored its use in audio-visual modalities, and none, to our knowledge, illustrate them in the context of granular audio-visual detection or segmentation tasks such as sound source separation and localization. In this work, we introduce TriBERT – a transformer-based architecture, inspired by ViLBERT, which enables contextual feature learning across three modalities: vision, pose, and audio, with the use of flexible co-attention. The use of pose keypoints is inspired by recent works that illustrate that such representations can significantly boost performance in many audio-visual scenarios where often one or more persons are responsible for the sound explicitly (e.g., talking) or implicitly (e.g., sound produced as a function of human manipulating an object). From a technical perspective, as part of the TriBERT architecture, we introduce a learned visual tokenization scheme based on spatial attention and leverage weak-supervision to allow granular cross-modal interactions for visual and pose modalities. Further, we supplement learning with sound-source separation loss formulated across all three streams. We pre-train our model on the large MUSIC21 dataset and demonstrate improved performance in audio-visual sound source separation on that dataset as well as other datasets through fine-tuning.
In addition, we show that the learned TriBERT representations are generic and significantly improve performance on other audio-visual tasks such as cross-modal audio-visual-pose retrieval by as much as 66.7% in top-1 accuracy. 1 Introduction Multi-modal audio-visual learning [57], which explores and leverages the relationship between visual and auditory modalities, has started to emerge as an important sub-field of machine learning and computer vision. Examples of typical tasks include: audio-visual separation and localization, where the goal is to segment sounds produced by individual objects in an audio and/or to localize those objects in a visual scene [15, 16, 42, 55]; and audio-visual correspondence, where the goal is often audio-visual retrieval [23, 47, 53]. Notably, some of the most recent audio-visual methods [15] leverage human pose keypoints, or landmarks, as an intermediate or contextual representation. This tends to improve the overall performance of sound separation, as pose and motion are important cues for characterising both the type of instrument being played and, potentially, over time, the rhythm of the individual piece [15]. It can also serve as an intermediate representation when generating video from acoustic signals [8, 44] for example. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Most of the existing architectures tend to extract features from the necessary modalities using pre-trained backbones (e.g., CNNs applied to video frames [55], object regions [16], and audio spectrograms; and/or graph CNN for human pose [15]) and then construct problem-specific architectures that often utilize simple late fusion for cross-modal integration in decoding (e.g., to produce spectrogram masks [15, 16, 55]). This is contrary to current trends in other multi-modal problem domains, where over the past few years, approaches have largely consolidated around generic multi-modal feature learning architectures that are task agnostic to produce contextualized feature representations and then fine-tune those representations to a variety of tasks (e.g., visual question answering (VQA) or reasoning (VCR)) and datasets. Examples of such architectures include ViLBERT [33], VL-BERT [46], and Unicoder-VL [31], all designed specifically for visual-linguistic tasks. Audio-visual representation learning has, in comparison, received much less attention. Most prior works [51] assume a single sound source per video and rely on audio-visual alignment objectives. Exceptions include [39], which relies on proposal mechanisms and multiple-instance learning [49] or co-clustering [25]. These approaches tend to integrate multi-modal features extracted using pre-trained feature extractors (e.g., CNNs) at a somewhat shallow level. The very recent variants [6, 28, 35] leverage transformers for audio-visual representation learning through simple classification [6] and self-supervised [28] or contrastive [35] learning objectives while only illustrating performance on video-level audio-visual action classification. To the best of our knowledge, no audio-visual representation learning approach to date has explored pose as one of the constituent modalities; nor has shown that feature integration and contextualization at a hierarchy of levels, as is the case for BERT-like architectures, can lead to improvements on granular audio-visual tasks such as audio-visual sound source separation. 
To address the aforementioned limitations, we formulate a human-centric audio-visual representation learning architecture, inspired by ViLBERT [33] and other transformer-based designs, with an explicit goal of improving the state-of-the-art in audio-visual sound source separation. Our transformer model takes three streams of information: video, audio, and (pose) keypoints and co-attends among those three modalities to arrive at enriched representations that can then be used for the final audiovisual sound separation task. We illustrate that these representations are general and also improve performance on other auxiliary tasks (e.g., forms of cross-modal audio-visual-pose retrieval). From a technical perspective, unlike ViLBERT and others, our model does not rely on global frame-wise features nor an external proposal mechanism. Instead, we leverage a learned attention to form visual tokens, akin to [42], and leverage weakly supervised objectives that avoid single sound-source assumptions for learning. In addition, we introduce spectrogram mask prediction as one of our pre-training tasks to enable the network to better learn task-specific contextualized features. Contributions: Foremost, we introduce a tri-modal VilBERT-inspired model, which we call TriBERT, that co-attends among visual, pose keypoint, and audio modalities to produce highly contextualized representations. We show that these representations, obtained by optimizing the model with respect to uni-modal (weakly-supervised) classification and sound separation pretraining objectives, produce features that improve audio-visual sound source separation at large and also work well on other downstream tasks. Further, to avoid reliance on the image proposal mechanisms, we formulate tokenization in the image stream in terms of learned attentional pooling, which is learned jointly. This alleviates the need for externally trained detection mechanisms, such as Faster R-CNN and variants. We illustrate competitive performance on a number of granular audio-visual tasks both by using the TriBERT model directly, using it as a feature extractor, or by fine-tuning it. 2 Related works Audio-visual Tasks. There exists a close relationship between visual scenes and the sounds that they produce. This relationship has been leveraged to complete various audio-visual tasks. Based on [57]’s survey of audio-visual deep learning, these tasks can be categorized into four subfields, three of which are addressed in this paper and described in the following three subsections. Audio-visual Sound Source Separation and Localization. Sound source separation and the related task of sound source localization have been studied quite extensively. Previous works studying separation, also known as the cocktail party problem [19], leverage multi-modal audio-visual information [11, 14] to help improve performance with respect to their audio-only counterparts [26, 34]. Examples include learning correlations between optical flow and masked frequencies [9, 13], using graphical models [21], detecting salient motion signals that correspond to audio events [30, 40], and extracting pose keypoints to model human movements [15]. A close connection between separation and localization has also been illustrated [40, 43, 55, 56]. For example, [16, 42] both formulate the task as one of auditory and visual co-segmentation, either with pre-trained object regions obtained by the detector [16] or directly from the image [42]. 
All of these approaches contain highly specialized architectures with custom fusion schemes. We aim to leverage the flexibility of transformer models to create generalized multi-modal representations that improve on audio-visual tasks. Audio-visual Representation Learning. The goal is typically to learn aligned representations. The quality of these representations has been shown to greatly impact the overall performance of downstream tasks [4]. A common strategy for representation learning is to introduce a proxy task. In the audio-visual space, past works [1, 2, 38] have trained networks by having them watch and listen to a large number of unlabeled videos containing both positive samples of matching audio and visual pairs and negative samples of mismatched pairs; the proxy task is binary classification of whether or not the audio and visual streams match each other. Other proxy tasks include determining whether or not an audio-visual pair is time synchronized [27]; and [29] uses a classification task to identify the correct visual clip or audio stream from a set with negative samples. However, these works rely on the assumption that only one main sound source occurs at a time and everything else is background noise. Our model uses a weakly supervised proxy objective to learn representations for multiple sources of sound (two in experiments) occurring simultaneously and also learns to incorporate pose features. Audio-visual Correspondence Learning. One of the fundamental tasks in correspondence learning related to our work is cross-modality retrieval. Most prior works focus on audio-visual retrieval [24, 36, 48] and propose learning a joint embedding space into which both modalities can be mapped. In this space, semantically related embeddings are close to each other, and thus retrieval can be performed by selecting the closest embedding to the query from the alternate modality. In our work, we demonstrate that the enhanced feature representations obtained by our pretrained model capture aligned semantics and lead to much better cross-modal retrieval than baseline representations. Visiolinguistic Representation Learning. Our model is inspired by the recent successes of visiolinguistic representations. Most such approaches leverage a combination of uni-modal and cross-modal transformer modules to pre-train generic visiolinguistic representations on masked language and/or multi-modal alignment tasks. For example, [33] proposes separate streams for each modality that communicate with each other through co-attention, while [46] uses a single-stream model that takes both visual and linguistic embeddings as input. In our work, we also leverage co-attention modules to learn joint representations between audio, pose, and vision modalities. However, in addition to extending co-attention, we also focus on reformulating image tokenization and demonstrate the ability to learn with weakly-supervised classification objectives as opposed to masked token predictions. 3 Approach We introduce TriBERT, a network that learns a joint representation of three modalities: vision, pose, and audio. We briefly review ViLBERT, the architecture that inspired TriBERT, in Section 3.1. We then describe our TriBERT architecture in Section 3.2, including pretraining tasks and objectives. 3.1 Reviewing Vision-and-Language BERT (ViLBERT) Motivated by the recent success of the BERT architecture for transfer learning in language modeling, Lu et al. [33] proposed ViLBERT to represent text and visual content jointly.
ViLBERT is a two-stream model for image regions and text segments. Each stream is similar to the BERT architecture, containing a series of transformer blocks (TRM) [50]. Given an image I with corresponding regions-of-interest (RoIs) or bounding boxes v0, v1, ..., vN and an input sentence S with word tokens w0, w1, ..., wT, the final output representations are hv0, hv1, ..., hvN and hw0, hw1, ..., hwT for the visual and linguistic features, respectively. To exchange information between the two modalities, the authors introduced a co-attentional transformer layer which computes query (Q), key (K), and value (V) pairs like a standard transformer block. The keys and values from each modality are then fed to the multi-headed attention block of the other modality. The attention block in each stream generates attention-pooled features conditioned on the other modality and outputs a multi-modal joint representation which outperforms single-stream models across multiple vision-and-language tasks. 3.2 TriBERT Architecture The architecture of our proposed TriBERT network is illustrated in Figure 1. Inspired by the recent success of ViLBERT in the vision-and-language domain, we modify its architecture to a three-stream network for vision, pose, and audio. Similar to ViLBERT [33], we use a bi-directional Transformer encoder [50] as the backbone network. However, TriBERT also introduces integral components that differentiate its architectural design. First, instead of using bounding box visual features generated by a pre-trained object detector [33] or CNN feature columns [7], TriBERT uses a jointly trained, weakly supervised visual segmentation network. Our end-to-end segmentation network takes a sequence of consecutive frames to detect and segment objects, and the corresponding features are pooled and fed as tokens to the visual stream. Second, the pose tokens are characterized by per-person keypoints encoded using a Graph CNN, and the audio token is produced by the VGGish Network [22] applied to an audio spectrogram. All three types of tokens form the input to TriBERT, which refines them using tri-modal co-attention to arrive at the final multi-modal representations. Training TriBERT requires the definition of proxy/pretraining tasks and the corresponding losses (see Section 3.2.1). Specifically, while we adopt the token masking used in ViLBERT and others, we are unable to define classification targets per token in our visual and pose streams. This is because we only assume per-video labels (e.g., of instruments played) and no access to how those map to attended sounding regions or person instances involved. To address this, we introduce weakly-supervised classification losses for those two streams. Since only one global audio representation is used, this is unnecessary in the audio stream and standard cross-entropy classification can be employed. Finally, motivated by recent works that show that multi-task pretraining is beneficial for ViLBERT [32], we introduce an additional spectrogram mask prediction pretraining task which predicts spectrogram masks for each individual audio source from the input spectrogram (bottom block, Figure 1). Visual Representations. Unlike [33], we consider input video frames instead of detected object/bounding box features as our visual input and propose an end-to-end approach to detect and segment objects from each individual frame. Figure 2 illustrates our visual segmentation network, which takes in RGB frames as input.
To extract global features, we use ResNet50 [20] as the backbone network, followed by a 3 × 3 convolution to generate H × W visual spatial features, which are then fed into the segmentation network. Following [54], we use a decoupled spatial neural attention structure to detect and localize discriminative objects simultaneously. The attention network has two branches: (1) an expansive attention detector, which aims to detect object regions and generate the expansive attention map S_E ∈ R^{C×H×W} (top branch of Figure 2); and (2) a discriminative attention detector, which aims to predict discriminative regions and generate the discriminative attention map S_D ∈ R^{C×H×W} (bottom branch of Figure 2). The expansive attention detector contains a drop-out layer followed by a 1 × 1 convolution, another drop-out layer, a non-linear activation, and a spatial-normalization layer, defined as follows:

$\lambda^c_{(i,j)} = F\big(W_c^T V_e(:, i, j) + b^c\big)$,  (1)

$\alpha^c_{(i,j)} = \dfrac{\lambda^c_{(i,j)}}{\sum_i^H \sum_j^W \lambda^c_{(i,j)}}$,  (2)

where c indexes the C classes and F(·) denotes the non-linear activation function. The final attention map A_m is generated as A_m = S_E ⊙ S_D, where ⊙ denotes element-wise multiplication. Spatial average pooling is applied to A_m to generate a classification score for each corresponding class, and the features of the top two scoring classes are pooled from the spatial visual feature V_e. The resultant 3 × 2 × 1024 visual embeddings are used to train our proposed TriBERT architecture, where 3 corresponds to the number of frames and 2 to the number of "objects" per frame. Keypoint (pose) Representations. Our goal is to capture human body and finger movement through keypoint representations. Therefore, we extract 26 keypoints for body joints and 21 keypoints for each hand using the AlphaPose toolbox [12]. As a result, we identify the 2D (x, y) coordinates and corresponding confidence scores of 68 body joints. Following [15], we use a Graph CNN to generate semantic context comprising those joints. Similar to prior work [52] on action recognition, we construct a Spatial-Temporal Graph Convolutional Network G = {V, E}, where each node v_i ∈ V corresponds to a body joint keypoint and each edge e_i ∈ E represents the natural connectivity between those keypoints. We use the 2D coordinates of the detected body joints with confidence scores as input to each node and construct a spatial-temporal graph by: (1) connecting human body joints within a single frame according to body structure; and (2) connecting each joint with the same joint in consecutive frames. This way, multiple layers of spatial-temporal graph convolutions are constructed to generate higher-level features for human keypoints. We use publicly available code (https://github.com/yysijie/st-gcn) to re-train their model on our dataset and extract body joint features of size 2 × 256 × 68 before the final classification layer (corresponding to two person instances). We apply a linear layer to transform these to 3 × 2 × 1024 input embeddings for pose BERT, where 3 corresponds to the number of visual frames and 2 to the maximum number of persons per frame. Audio Representations. Consistent with prior works, we use a time-frequency representation of the input audio. We apply the STFT [18] to generate the corresponding spectrogram and then transform the magnitudes of the spectrogram onto the log-frequency scale for further processing.
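For concreteness, the log-frequency magnitude spectrogram described above can be sketched as follows. This is a minimal illustration, not the released implementation: the sampling rate, window, hop length, and output size follow the implementation details reported later (roughly 6 seconds of 11 kHz audio, Hann window of 1022, hop length of 256, 256 × 256 output), while the exact log-frequency warping and resizing are assumptions.

```python
import math
import torch
import torch.nn.functional as F

def log_freq_spectrogram(wave, n_fft=1022, hop=256, out_bins=256, out_frames=256):
    """Magnitude spectrogram warped onto a log-spaced frequency axis (rough sketch).

    `wave` is a mono waveform sampled at ~11 kHz and roughly 6 s long; the warping
    and resizing here are simple approximations, not the authors' exact routine.
    """
    window = torch.hann_window(n_fft)
    spec = torch.stft(wave, n_fft=n_fft, hop_length=hop, window=window,
                      return_complex=True)                      # (n_fft//2 + 1, frames)
    mag = spec.abs()                                            # linear-frequency magnitudes

    # Re-index the frequency axis onto log-spaced bins.
    n_freq = mag.shape[0]
    log_idx = torch.logspace(0, math.log10(n_freq - 1), out_bins).long()
    mag = mag[log_idx]                                          # (out_bins, frames)

    # Resize the time axis to a fixed length -> final 1 x 256 x 256 input.
    mag = F.interpolate(mag[None, None], size=(out_bins, out_frames),
                        mode="bilinear", align_corners=False)[0]
    return mag                                                  # (1, 256, 256)

spec = log_freq_spectrogram(torch.randn(66150))  # ~6 s at 11025 Hz
print(spec.shape)                                # torch.Size([1, 256, 256])
```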
The size of the final input audio spectrogram is 1 × 256 × 256 and it is used in two ways: (1) as an audio embedding for audio BERT; and (2) as the input audio for the attention U-Net for the task of sound source separation, which predicts individual audio spectrogram masks (see Figure 1). Before passing it to audio BERT, we use a VGGish Network [22] to extract global features for the input audio embedding. Tri-modal Co-attention. Recent works [3, 33] propose co-attentional transformer layers to generate effective representations of vision conditioned on language and vice versa. (Figure: the tri-modal co-attentional block. Each of the Visual, Pose, and Audio streams consists of multi-head attention followed by Add & Norm, a feed-forward layer, and another Add & Norm; the visual stream uses queries QV with keys/values K(P,A), V(P,A), the pose stream uses QP with K(V,A), V(V,A), and the audio stream uses QA with K(P,V), V(P,V).) In this paper, we introduce a tri-modal co-attentional layer, illustrated in the accompanying figure, by extending ViLBERT's co-attentional transformer layers [33]. Given intermediate representations for vision, pose, and audio, denoted as HV(i), HP(j), and HA(k), respectively, each stream computes individual query (Q), key (K), and value (V) matrices. The keys and values from two modalities are then concatenated together and fed as input to the multi-head attention block of the third modality. As a result, the block generates attention features conditioned on the other two modalities. We keep the rest of the architecture, such as the feed-forward layers, residual connections, etc., the same as a standard transformer block, which is then used to generate effective multi-modal features.
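A minimal sketch of the tri-modal co-attention block described above is given below. For simplicity it assumes a single shared hidden size across the three streams (the actual model uses 1024 for visual/pose and 512 for audio, which would require additional projection layers); the module names and feed-forward width are illustrative and are not taken from the released code.

```python
import torch
import torch.nn as nn

class TriModalCoAttention(nn.Module):
    """One tri-modal co-attention block: each stream queries with its own hidden
    states and attends over the concatenated keys/values of the other two streams."""

    def __init__(self, dim=1024, heads=8):
        super().__init__()
        names = ("visual", "pose", "audio")
        self.attn = nn.ModuleDict({m: nn.MultiheadAttention(dim, heads, batch_first=True)
                                   for m in names})
        self.norm1 = nn.ModuleDict({m: nn.LayerNorm(dim) for m in names})
        self.ffn = nn.ModuleDict({m: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                                   nn.Linear(4 * dim, dim)) for m in names})
        self.norm2 = nn.ModuleDict({m: nn.LayerNorm(dim) for m in names})

    def forward(self, h_v, h_p, h_a):
        others = {"visual": (h_p, h_a), "pose": (h_v, h_a), "audio": (h_v, h_p)}
        out = {}
        for name, query in zip(("visual", "pose", "audio"), (h_v, h_p, h_a)):
            kv = torch.cat(others[name], dim=1)                  # tokens of the other two streams
            ctx, _ = self.attn[name](query, kv, kv)              # Q from own stream, K/V from others
            x = self.norm1[name](query + ctx)                    # Add & Norm
            out[name] = self.norm2[name](x + self.ffn[name](x))  # Feed forward + Add & Norm
        return out["visual"], out["pose"], out["audio"]

# Example with 3 frames x 2 tokens per frame for visual/pose, and a repeated audio token.
h_v, h_p, h_a = (torch.randn(2, 6, 1024) for _ in range(3))
v, p, a = TriModalCoAttention()(h_v, h_p, h_a)
```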
3.2.1 Training Tasks We pre-train TriBERT jointly on two tasks: instrument classification and sound source separation. Our proposed architecture has three separate streams, and each stream performs an individual classification task. To train our TriBERT model, we use the MUSIC21 dataset [56], which covers 21 instrument classes. Weakly-supervised Visual and Pose Classification. Our visual segmentation network generates attention features for the input video frames. We then apply a spatial pooling, and the resulting feature vector is fed into the visual BERT. We use a special <SOS> token at the beginning of the input frame sequence to represent the entire visual input. Following [33], we apply masking to approximately 15% of the input image regions (see Figure 1). The output of the visual BERT is a sequence of hidden representations hv0, hv1, ..., hvN conditioned on the pose and audio modalities. We use mean pooling of all hidden representations to perform classification for the detected objects. Similarly, pose BERT generates a sequence of hidden representations hp0, hp1, ..., hpN conditioned on the visual and audio modalities, and we apply classification based on the mean pooling of all hidden states. Due to the lack of instance annotations, we cannot use region/pose-level supervision. Following [5], we use a weakly-supervised approach to perform region selection and classification. Audio Classification. Since we do not have a sequence of audio embeddings, we artificially create an audio sequence by repeating the VGGish audio feature to generate a sequence of hidden representations ha0, ha1, ..., haN conditioned on the visual and pose modalities. This is done purely for engineering convenience, to allow consistent use of tri-modal co-attention across modalities. We then apply audio classification on the mean feature of all hidden representations. Multi-modal Sound Source Separation. We consider sound source separation as one of our initial tasks and follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], a well-known approach to this problem. The goal is to mix multiple audio signals to generate an artificially complex auditory representation and then learn to separate individual sounds from the mixture. Given two input videos V1 and V2 with accompanying audio A1(t) and A2(t), we mix A1 and A2 to generate a complex audio signal mixture Am(t) = A1(t) + A2(t). Suppose V1 has two objects o1′ and o1′′ with accompanying audio a1′ and a1′′, while V2 has one object o2′ with audio a2′. The goal is to separate sounds a1′, a1′′, and a2′ from the mixture Am(t) by predicting spectrogram masks using an attention U-Net [37], which takes the mixed spectrogram as input. The attention U-Net contains 7 convolutions and 7 de-convolutions with skip connections. The skip connections use attention gates (AG), which comprise simple additive soft attention, to highlight relevant regions of the audio spectrograms. The overhead of the attention U-Net over a plain U-Net is fairly minimal: in terms of the number of parameters, the attention U-Net contains a modest 9% more parameters than U-Net, and the inference speed is only 7% slower [37]. The attention U-Net outputs the final magnitude of the spectrogram mask (bottom branch in Figure 1), guided by audio-visual-pose features. Following [15], we adopt a self-attention-based early fusion between the bottleneck of the attention U-Net and the fused features (i.e., concatenation of features) corresponding to the <SOS> tokens of the three BERT streams. We combine the predicted magnitude of the spectrogram mask from the attention U-Net with the phase of the input spectrogram and then use the inverse STFT [18] to get back the waveform of the prediction. Training Objective. We consider weakly-supervised classification for the visual and pose modalities. Following [5], we use two data streams from the hidden state of each modality. The first stream corresponds to a class score (β_class) for each individual region to perform recognition. This is achieved by a linear layer followed by a softmax operation (see Eq. 3). The second stream computes a probability distribution (β_det) for performing a proxy detection. This is done by using another linear layer followed by another softmax operation (see Eq. 4), as follows:

$\beta_{class}(h^c)_{ij} = \dfrac{e^{h^c_{ij}}}{\sum_{t=1}^{C} e^{h^c_{tj}}}$,  (3)

$\beta_{det}(h^d)_{ij} = \dfrac{e^{h^d_{ij}}}{\sum_{t=1}^{|R|} e^{h^d_{tj}}}$,  (4)

where h^c ∈ R^{C×|R|}, h^d ∈ R^{C×|R|}, C denotes the number of classes, and |R| the number of regions. We then aggregate the recognition and detection scores to predict the class of all image regions as β_R = β_class(h^c) ⊙ β_det(h^d), where ⊙ denotes an element-wise product of the two scoring matrices. Finally, we apply a BCE loss [10] to train the visual and pose BERT. For audio classification, we consider a classification layer to predict audio classes and similarly apply a BCE loss to train the audio BERT. For the sound separation task, our goal is to learn separate spectrogram masks for each individual object. Following [55], we use a binary mask, which effectively corresponds to hard attention, and use a per-pixel sigmoid cross-entropy loss (BCE loss) to train the network.
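The two-stream weakly supervised scoring of Eqs. 3 and 4 can be sketched as follows. The choice of heads, the aggregation over regions into a video-level score, and the use of multi-hot labels are our reading of the description above and of [5], not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def weakly_supervised_scores(hidden, cls_head, det_head):
    """Two-stream weakly supervised scoring (in the spirit of Eqs. 3-4).

    hidden:   (B, R, D) per-region hidden states from the visual or pose BERT.
    cls_head: linear layer D -> C producing recognition logits.
    det_head: linear layer D -> C producing proxy-detection logits.
    """
    h_c = cls_head(hidden)                  # (B, R, C)
    h_d = det_head(hidden)                  # (B, R, C)
    beta_class = F.softmax(h_c, dim=-1)     # softmax over classes, per region
    beta_det = F.softmax(h_d, dim=1)        # softmax over regions, per class
    beta_r = beta_class * beta_det          # element-wise product of the two scores
    return beta_r.sum(dim=1)                # aggregate over regions -> (B, C) video-level scores

# Example: BCE against per-video multi-hot instrument labels (two instruments per video).
B, R, D, C = 4, 6, 1024, 21
hidden = torch.randn(B, R, D)
cls_head, det_head = nn.Linear(D, C), nn.Linear(D, C)
scores = weakly_supervised_scores(hidden, cls_head, det_head)
labels = torch.zeros(B, C).scatter_(1, torch.randint(0, C, (B, 2)), 1.0)
loss = F.binary_cross_entropy(scores.clamp(0, 1), labels)
```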
Implementation Details. We used PyTorch to implement our network (code available at https://github.com/ubc-vision/TriBERT). We consider three random consecutive frames of size 224 × 224 × 3 as our input sequence for visual and pose BERT and use a pre-trained ResNet50 [20] to extract global visual features for further processing. (BERT-based architectures, including ours, require large GPU memory and longer training time; we therefore use only three frames to reduce computational cost, but the number of frames can easily be increased with the same architecture if resources allow. Further, we would like to highlight that a pose feature for one frame actually takes into account T=256 frames of poses using the Spatial-Temporal Graph Convolutional Network, so long-term contextual pose information is taken into account [52].) For the pose stream, we first predict 2D coordinates of body and finger keypoints for each frame using AlphaPose [12] and then use the graph CNN [52] to generate feature vectors for each keypoint. Similar to prior works [15, 55], we sub-sample the audio signals to 11 kHz to reduce the computational cost and then select approximately 6 seconds of audio by random cropping. To follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], we mix audio inputs and generate a time-frequency audio spectrogram using the STFT with a Hann window size of 1022 and a hop length of 256. We then transform the spectrogram onto the log-frequency scale to obtain the final 256 × 256 time-frequency representation. The transformers for visual/pose and audio have hidden state sizes of 1024 and 512, respectively, with 8 attention heads. We use the Adam optimizer with an initial learning rate of 1e−5 and a batch size of 12 to train the network on 4 GTX 1080 GPUs for 6k epochs. Training takes approximately 192 hours. 3.2.2 Runtime Inference We use the MUSIC21 dataset [56] to train our network on two pretraining tasks: classification and sound source separation. We can use this network directly for sound separation on MUSIC21. We also fine-tune the pre-trained TriBERT on the MUSIC dataset [55] with 11 audio classes, which is a subset of the MUSIC21 dataset. We follow a fine-tuning strategy where we modify the classification layer of each pre-trained stream and then train our proposed model end-to-end with a learning rate of 1e−7 for 1500 epochs, while keeping the rest of the hyper-parameters the same as for the initial task. 4 Experiments Datasets. We consider the MUSIC21 dataset [56], which contains 1365 untrimmed videos of musical solos and duets from 21 instrument classes, for the initial training of our TriBERT architecture. For fine-tuning, we use the MUSIC dataset [55], which is a subset of MUSIC21, containing 685 untrimmed videos of musical solos and duets from 11 instrument classes. 4.1 Experiments for Sound Separation Evaluation Metrics. We use three common metrics to quantify the performance of sound separation: Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR), and Signal-to-Artifact Ratio (SAR). We report all of the results with the widely used mir_eval library [41].
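As a small illustration, these metrics can be computed with mir_eval along the following lines; the random waveforms stand in for real reference and estimated separations.

```python
import numpy as np
from mir_eval.separation import bss_eval_sources

# Reference (ground-truth) and estimated sources, each of shape (n_sources, n_samples);
# random signals are used here purely for illustration.
references = np.random.randn(2, 11025 * 6)
estimates = references + 0.05 * np.random.randn(2, 11025 * 6)

sdr, sir, sar, perm = bss_eval_sources(references, estimates)
print("SDR:", sdr.mean(), "SIR:", sir.mean(), "SAR:", sar.mean())
```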
Baselines. The MUSIC21 dataset contains 1365 untrimmed videos, but we found 314 of those to be missing. Moreover, the train/val/test split was unavailable. As a result, for fair comparison, we trained our baselines [15, 55] with the available videos using an 80/20 train/test split. We use publicly available code (https://github.com/hangzhaomit/Sound-of-Pixels) to train "Sound-of-Pixels" [55]. For "MUSIC-Gesture" [15], we re-implemented the model by extracting pose features using the Graph CNN [52]. Our reproduced results are comparable with those reported: the SIR score reported in [15] is 15.81, which is close to the 15.27 achieved by our reimplementation of their method; our reproduced SDR score is a bit lower than the 10.12 reported in [15], but this is perhaps expected given that 23% of the dataset was missing. For the MUSIC dataset, we follow the experimental protocol from [42] and consider their reported results as our baselines. Quantitative and Qualitative Results. Table 1 shows the quantitative results for the sound separation pre-training task on the MUSIC21 dataset. Here, we include the performance of our method and the baselines when only single-source videos (solos) or multi-source videos (solos+duets) are used to train all models. Our TriBERT outperforms the baseline models in all evaluation metrics (10.09 vs. 8.08 SDR for single-source training). We then fine-tune our model on the MUSIC dataset with the train/val/test split from [16] (see Table 2). Our model again outperforms all baselines in all metrics (12.34 vs. 9.29 in SDR). Figure 3 illustrates the corresponding qualitative results. The 1st, 2nd, and 3rd columns show the mixed video pairs and the accompanying audio mixture, respectively. Columns 4 and 5 illustrate the ground-truth spectrogram masks, while columns 6/7 and 8/9 show the spectrogram masks predicted by [15] and by our method, respectively. Finally, the ground-truth spectrograms, the spectrograms predicted by [15], and those predicted by our method are illustrated in columns 10/11, 12/13, and 14/15, respectively. It is clear that TriBERT, both quantitatively and qualitatively, outperforms the state-of-the-art in sound separation. 4.2 Multi-modal Retrieval Retrieval Variants. In this experiment, we analyze the semantic alignment among the three modalities that TriBERT learns to encode. This is done through cross-modal retrieval, where, given a single modality embedding or a pair of modality embeddings, we attempt to identify the matching embedding from a different modality. We consider five variants: audio → vision, vision → audio, audio → pose, pose → audio, and vision+audio → pose. Throughout this section, we refer to the embedding we have as the query embedding and the embedding we want to retrieve as the result embedding. We train and evaluate on the MUSIC21 dataset, using the same 80-20 train-test split used to learn TriBERT. We consider two types of embeddings for the three modalities. First, we use the transformer-based embeddings, consisting of the concatenations of the hidden representations hv0...hv3, hp0...hp3, and ha0...ha3 for visual, pose, and audio, respectively. Additionally, we establish a baseline by training with the embeddings used as input to the three BERT streams. This baseline can be viewed as an ablation study for the transformer layers. Retrieval Training. Similar to [33], we train using an n-way multiple-choice setting. Here, n depends on the variant of the retrieval task, where n = 4 for the vision+audio → pose variant and n = 3 for the four remaining single-modality variants. In either case, one positive pair is used and n − 1 distractors are sampled. Further details are provided in the Supplemental Materials. We use an MLP that takes as input a fusion representation of both the query and result embeddings, computed as the element-wise product of the two. The module then outputs a single logit, interpreted as a binary prediction for whether the query and result embeddings are aligned.
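A minimal sketch of this retrieval scorer is shown below: the query and candidate result embeddings are fused by an element-wise product and mapped to one alignment logit per candidate, with a softmax and cross-entropy over the n choices. The projection and MLP sizes are illustrative assumptions; the paper defers such details to the supplement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrievalScorer(nn.Module):
    """Alignment scorer: project query and result embeddings to a common size,
    fuse them by an element-wise product, and map to a single logit per candidate."""

    def __init__(self, query_dim, result_dim, dim=512):
        super().__init__()
        self.proj_q = nn.Linear(query_dim, dim)
        self.proj_r = nn.Linear(result_dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, query, results):
        # query: (B, query_dim); results: (B, n, result_dim) -> (B, n) alignment scores.
        q = self.proj_q(query).unsqueeze(1)       # (B, 1, dim)
        r = self.proj_r(results)                  # (B, n, dim)
        return self.mlp(q * r).squeeze(-1)        # element-wise fusion, then one logit each

# n-way multiple choice: candidate 0 is the positive pair, the rest are distractors.
B, n = 8, 3
scorer = RetrievalScorer(query_dim=2048, result_dim=4096)
scores = scorer(torch.randn(B, 2048), torch.randn(B, n, 4096))
loss = F.cross_entropy(scores, torch.zeros(B, dtype=torch.long))
```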
For the vision+audio → pose variant, an additional MLP, based on [15], is used to combine the vision and audio embeddings before the final element-wise product with the pose embedding. Additionally, since neither the transformer-based nor the pre-transformer embeddings are consistent in shape across the three modalities, we also use linear layers as required to transform them to a consistent shape. This overall retrieval network is trained end-to-end. For each multiple choice, the network computes an alignment score, after which a softmax is applied across all n scores. We train using a cross-entropy loss for 750 epochs with a batch size of 64, using the Adam optimizer with an initial learning rate of 2e−5. Retrieval Results. Figure 4 shows the qualitative results for two variants of retrieval. Additionally, Table 3 shows quantitative results for the five retrieval variants using the transformer-based representation, the baseline pre-transformer representation, and also a model that simply selects randomly from the pool. We see that retrieval using the transformer-based embeddings results in significantly better performance than the pre-transformer ones. This shows that the tri-modal co-attention modules are an integral component in learning a semantically meaningful relationship between the three modalities. Notably, in Table 3, we can see that vision+audio → pose is worse than audio → pose in top-1 accuracy. The performance of the two models is not necessarily directly comparable. Specifically, there are two issues that should be considered:
- The input dimensionality and number of parameters of the vision+audio retrieval model are significantly larger, with an additional MLP layer used for fusion. This means that the vision+audio model is more prone to over-fitting, as exhibited by the lower top-1 performance. Note that the top-5 and top-10 performance of vision+audio → pose is better.
- The number of distractors (n − 1) is different in the two settings. For single-modality retrieval variants, we use two distractors (negative pairings), while for the two-modality variant, we use three distractors. This may also marginally affect the performance, since in the n-way classification, having more distractors puts more focus on the negatives.
However, we want to stress that the goal of these experiments is not to compare which modality or combination of modalities is best for retrieval. Instead, the goal is to illustrate the effectiveness of the TriBERT representations. Each of the five retrieval models is simply an instance of a retrieval task. We could use any alternative (more sophisticated) models for retrieval here. The key observation is that in all five cases, TriBERT representations perform significantly better in retrieval compared with the baseline representations (used as input to TriBERT). This is strong evidence that TriBERT representations are effective. We make no claims with regard to the optimality of the retrieval formulation or objective; it is simply used as a proxy for evaluating TriBERT representations. 5 Conclusion In this paper, we introduce TriBERT, a three-stream model with tri-modal co-attention blocks that generates a generic representation for multiple audio-visual tasks. We pre-train our model on the MUSIC21 dataset and show that our model exceeds the state-of-the-art for sound separation. We also find that TriBERT learns more generic and aligned multi-modal representations, leading to strong performance on the cross-modal audio-visual-pose retrieval task.
In this work, we limit ourselves to two datasets and fundamental audio-visual tasks. In the future, we plan to consider using more datasets and expanding to a broader set of tasks (e.g., generation). The role of positional embeddings should also be explored. Acknowledgments: This work was funded in part by the Vector Institute for AI, a Canada CIFAR AI Chair, an NSERC Canada Research Chair (CRC), and NSERC Discovery and Discovery Accelerator Supplement Grants. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/#partners). Additional hardware support was provided by a John R. Evans Leaders Fund CFI grant and Compute Canada under the Resource Allocation Competition award.
1. What is the main contribution of the paper regarding audio-visual representation learning?
2. What are the strengths and weaknesses of the proposed TriBERT model, particularly in its architecture and performance?
3. Do you have any concerns or suggestions regarding the experiment design and validation?
4. How does the reviewer assess the clarity and readability of the paper?
5. Are there any specific questions or aspects that the reviewer would like the authors to clarify or elaborate on?
Summary Of The Paper
The goal of this work is to learn human-centric audio-visual representations. The authors claim to be the first in using transformer models in the context of granular audio-visual detection or segmentation tasks (e.g., sound source separation). The authors propose TriBERT, which is a three-stream model with tri-modal attention to generate generic representations for AV tasks. The three modalities are audio, video, and pose (keypoints). The model is trained with weak supervision (video-level labels). The model gives state-of-the-art results on sound source separation.
Review
The paper introduces BERT-like contextualized representation learning for the audio-visual domain. This is a first in the context of AV source separation to my knowledge. The method appears to be effective for the task. The architecture is based on existing work (ViLBERT), but adds additional components such as a segmentation network, spectrogram mask prediction, and pose input. The results are strong on the datasets shown. However, as the authors noted, the experiments are limited to only two datasets, one of which cannot be reproduced completely due to missing files (line 307). Ablations are also not given to justify design choices. The results would be more convincing if they could be reproduced for other human-centric datasets such as action recognition (UCF101) or audio-visual speech recognition (LRS3). Due to the limitations in experimental validation, I recommend borderline reject. The English is fine, but the paper is not easy to follow (perhaps due to the large number of components to explain).
Additional questions: Do the authors have an explanation as to why V+A→P is so much worse than A→P in top-1 accuracy in Table 3? Isn't the pose just a processed form of the visual input? Is it just to make it more explicit so that it is easier for the network to process? What information is fed into the network in the case of occlusion?
Title: TriBERT: Human-centric Audio-visual Representation Learning
Abstract: The recent success of transformer models in language, such as BERT, has motivated the use of such architectures for multi-modal feature learning and tasks. However, most multi-modal variants (e.g., ViLBERT) have limited themselves to visual-linguistic data. Relatively few have explored its use in audio-visual modalities, and none, to our knowledge, illustrate them in the context of granular audio-visual detection or segmentation tasks such as sound source separation and localization. In this work, we introduce TriBERT – a transformer-based architecture, inspired by ViLBERT, which enables contextual feature learning across three modalities: vision, pose, and audio, with the use of flexible co-attention. The use of pose keypoints is inspired by recent works that illustrate that such representations can significantly boost performance in many audio-visual scenarios where often one or more persons are responsible for the sound explicitly (e.g., talking) or implicitly (e.g., sound produced as a function of a human manipulating an object). From a technical perspective, as part of the TriBERT architecture, we introduce a learned visual tokenization scheme based on spatial attention and leverage weak supervision to allow granular cross-modal interactions for visual and pose modalities. Further, we supplement learning with a sound-source separation loss formulated across all three streams. We pre-train our model on the large MUSIC21 dataset and demonstrate improved performance in audio-visual sound source separation on that dataset as well as other datasets through fine-tuning. In addition, we show that the learned TriBERT representations are generic and significantly improve performance on other audio-visual tasks such as cross-modal audio-visual-pose retrieval by as much as 66.7% in top-1 accuracy.
In addition, we show that the learned TriBERT representations are generic and significantly improve performance on other audio-visual tasks such as cross-modal audio-visual-pose retrieval by as much as 66.7% in top-1 accuracy. 1 Introduction Multi-modal audio-visual learning [57], which explores and leverages the relationship between visual and auditory modalities, has started to emerge as an important sub-field of machine learning and computer vision. Examples of typical tasks include: audio-visual separation and localization, where the goal is to segment sounds produced by individual objects in an audio and/or to localize those objects in a visual scene [15, 16, 42, 55]; and audio-visual correspondence, where the goal is often audio-visual retrieval [23, 47, 53]. Notably, some of the most recent audio-visual methods [15] leverage human pose keypoints, or landmarks, as an intermediate or contextual representation. This tends to improve the overall performance of sound separation, as pose and motion are important cues for characterising both the type of instrument being played and, potentially, over time, the rhythm of the individual piece [15]. It can also serve as an intermediate representation when generating video from acoustic signals [8, 44] for example. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Most of the existing architectures tend to extract features from the necessary modalities using pre-trained backbones (e.g., CNNs applied to video frames [55], object regions [16], and audio spectrograms; and/or graph CNN for human pose [15]) and then construct problem-specific architectures that often utilize simple late fusion for cross-modal integration in decoding (e.g., to produce spectrogram masks [15, 16, 55]). This is contrary to current trends in other multi-modal problem domains, where over the past few years, approaches have largely consolidated around generic multi-modal feature learning architectures that are task agnostic to produce contextualized feature representations and then fine-tune those representations to a variety of tasks (e.g., visual question answering (VQA) or reasoning (VCR)) and datasets. Examples of such architectures include ViLBERT [33], VL-BERT [46], and Unicoder-VL [31], all designed specifically for visual-linguistic tasks. Audio-visual representation learning has, in comparison, received much less attention. Most prior works [51] assume a single sound source per video and rely on audio-visual alignment objectives. Exceptions include [39], which relies on proposal mechanisms and multiple-instance learning [49] or co-clustering [25]. These approaches tend to integrate multi-modal features extracted using pre-trained feature extractors (e.g., CNNs) at a somewhat shallow level. The very recent variants [6, 28, 35] leverage transformers for audio-visual representation learning through simple classification [6] and self-supervised [28] or contrastive [35] learning objectives while only illustrating performance on video-level audio-visual action classification. To the best of our knowledge, no audio-visual representation learning approach to date has explored pose as one of the constituent modalities; nor has shown that feature integration and contextualization at a hierarchy of levels, as is the case for BERT-like architectures, can lead to improvements on granular audio-visual tasks such as audio-visual sound source separation. 
To address the aforementioned limitations, we formulate a human-centric audio-visual representation learning architecture, inspired by ViLBERT [33] and other transformer-based designs, with an explicit goal of improving the state-of-the-art in audio-visual sound source separation. Our transformer model takes three streams of information: video, audio, and (pose) keypoints and co-attends among those three modalities to arrive at enriched representations that can then be used for the final audiovisual sound separation task. We illustrate that these representations are general and also improve performance on other auxiliary tasks (e.g., forms of cross-modal audio-visual-pose retrieval). From a technical perspective, unlike ViLBERT and others, our model does not rely on global frame-wise features nor an external proposal mechanism. Instead, we leverage a learned attention to form visual tokens, akin to [42], and leverage weakly supervised objectives that avoid single sound-source assumptions for learning. In addition, we introduce spectrogram mask prediction as one of our pre-training tasks to enable the network to better learn task-specific contextualized features. Contributions: Foremost, we introduce a tri-modal VilBERT-inspired model, which we call TriBERT, that co-attends among visual, pose keypoint, and audio modalities to produce highly contextualized representations. We show that these representations, obtained by optimizing the model with respect to uni-modal (weakly-supervised) classification and sound separation pretraining objectives, produce features that improve audio-visual sound source separation at large and also work well on other downstream tasks. Further, to avoid reliance on the image proposal mechanisms, we formulate tokenization in the image stream in terms of learned attentional pooling, which is learned jointly. This alleviates the need for externally trained detection mechanisms, such as Faster R-CNN and variants. We illustrate competitive performance on a number of granular audio-visual tasks both by using the TriBERT model directly, using it as a feature extractor, or by fine-tuning it. 2 Related works Audio-visual Tasks. There exists a close relationship between visual scenes and the sounds that they produce. This relationship has been leveraged to complete various audio-visual tasks. Based on [57]’s survey of audio-visual deep learning, these tasks can be categorized into four subfields, three of which are addressed in this paper and described in the following three subsections. Audio-visual Sound Source Separation and Localization. Sound source separation and the related task of sound source localization have been studied quite extensively. Previous works studying separation, also known as the cocktail party problem [19], leverage multi-modal audio-visual information [11, 14] to help improve performance with respect to their audio-only counterparts [26, 34]. Examples include learning correlations between optical flow and masked frequencies [9, 13], using graphical models [21], detecting salient motion signals that correspond to audio events [30, 40], and extracting pose keypoints to model human movements [15]. A close connection between separation and localization has also been illustrated [40, 43, 55, 56]. For example, [16, 42] both formulate the task as one of auditory and visual co-segmentation, either with pre-trained object regions obtained by the detector [16] or directly from the image [42]. 
All of these approaches contain highly specialized architectures with custom fusion schemes. We aim to leverage the flexibility of transformer models to create generalized multi-modal representations that improve on audio-visual tasks. Audio-visual Representation Learning. The goal is typically to learn aligned representations. The quality of these representations has been shown to greatly impact the overall performance of tasks downstream [4]. A common strategy for representation learning is to introduce a proxy task. In the audio-visual space, past works [1, 2, 38] have trained networks by having them watch and listen to a large amount of unlabeled videos containing both positive samples of matching audio and visual pairs and negative samples of mismatched pairs; the proxy task is binary classification of whether or not the audio and visual match each other. Other proxy tasks include determining whether or not an audio-visual pair is time synchronized [27]; and [29] uses a classification task to identify the correct visual clip or audio stream from a set with negative samples. However, these works rely on the assumption that only one main sound source occurs at a time and everything else is background noise. Our model uses a weakly supervised proxy objective to learn representations for multiple sources of sound (two in experiments) occurring simultaneously and also learns to incorporate pose features. Audio-visual Correspondence Learning. One of the fundamental tasks in correspondence learning related to our work is cross-modality retrieval. Most prior works focus on audio-visual retrieval [24, 36, 48] and propose learning a joint embedding space where both modalities can be mapped to. In this space, semantically related embeddings are close to each other and thus retrieval can be performed by selecting the closest embedding to the query from the alternate modality. In our work, we demonstrate that enhanced feature representations obtained by our pretrained model capture aligned semantics and lead to much better cross-modal retrieval than baseline representations. Visiolinguistic Representation Learning. Our model is inspired by the recent successes of visiolinguistic representations. Most such approaches leverage a combination of uni-modal and cross-modal transformer modules to pre-train generic visiolinguistic representations on masked language and/or multi-modal alignment tasks. For example, [33] proposes separate streams for each modality that communicate with each other through co-attention, while [46] uses a single-stream model that takes both visual and linguistic embeddings as input. In our work, we also leverage co-attention modules to learn joint representations between audio, pose, and vision modalities. However, in addition to extending co-attention, we also focus on reformulating image tokenization and demonstrate the ability to learn with weakly-supervised classification objectives as opposed to masked token predictions. 3 Approach We introduce TriBERT, a network that learns a joint representation of three modalities: vision, pose, and audio. We briefly review ViLBERT, the architecture that inspired TriBERT, in Section 3.1. We then describe our TriBERT architecture in Section 3.2, including pretraining tasks and objectives. 3.1 Reviewing Vision-and-Language BERT (ViLBERT) Motivated by the recent success of the BERT architecture for transfer learning in language modeling, Lu et al. [33] proposed ViLBERT to represent text and visual content jointly. 
ViLBERT is a twostream model for image regions and text segments. Each stream is similar to the BERT architecture, containing a series of transformer blocks (TRM) [50]. Given an image I with corresponding regions-of-interest (RoIs) or bounding boxes v0, v1, ...vN and an input sentence S with word tokens w0, w1, ...wT , the final output representations are hv0, hv1, ..., hvN and hw0, hw1, ..., hwT for the visual and linguistic features, respectively. To exchange information between the two modalities, the authors introduced a co-attentional transformer layer which computes query (Q), key (K), and value (V ) pairs like a standard transformer block. The keys and values from each modality are then fed to the multi-headed attention block of the other modality. The attention block in each stream generates attention-pooled features conditioned on the other modality and outputs a multi-modal joint representation which outperforms single-stream models across multiple vision-and-language tasks. 3.2 TriBERT Architecture The architecture of our proposed TriBERT network is illustrated in Figure 1. Inspired by the recent success of ViLBERT in the vision-and-language domain, we modify its architecture to a three-stream network for vision, pose, and audio. Similar to ViLBERT [33], we use a bi-directional Transformer encoder [50] as the backbone network. However, TriBERT also introduces integral components that differentiate its architectural design. First, instead of using bounding box visual features generated by a pre-trained object detector [33] or CNN feature columns [7], TriBERT uses a jointly trained weakly supervised visual segmentation network. Our end-to-end segmentation network takes a sequence of consecutive frames to detect and segment objects, and the corresponding features are pooled and fed as tokens to the visual stream. Second, the pose tokens are characterized by per-person keypoints encoded using a Graph CNN, and the audio token is produced by the VGGish Network [22] applied to an audio spectogram. All three types of tokens form the input to TriBERT, which refines them using tri-modal co-attention to arrive at the final multi-modal representations. Training TriBERT requires the definition of proxy/pretraining tasks and the corresponding losses (see Section 3.2.1). Specifically, while we adopt token masking used in ViLBERT and others, we are unable to define classification targets per token in our visual and pose streams. This is because we only assume per-video labels (e.g., of instruments played) and no access to how those map to attended sounding regions or person instances involved. To address this, we introduce weakly-supervised classification losses for those two streams. Since only one global audio representation is used, this is unnecessary in the audio stream and standard cross-entropy classification can be employed. Finally, motivated by recent works that show that multi-task pretraining is beneficial for ViLBERT [32], we introduce an additional spectrogram mask prediction pretraining task which predicts spectrogram masks for each individual audio source from the input spectrogram (bottom block, Figure 1). Visual Representations. Unlike [33], we consider input video frames instead of detected object/bounding box features as our visual input and propose an end-to-end approach to detect and segment objects from each individual frame. Figure 2 illustrates our visual segmentation network which takes in RGB frames as input. 
To extract global features, we use ResNet50 [20] as the backbone network followed by a 3 × 3 convolution to generate H × W visual spatial features which are then fed into the segmentation network. Following [54], we use a decou- pled spatial neural attention structure to detect and localize discriminative objects simultaneously. The attention network has two branches: (1) Expansive attention detector, which aims to detect object regions and generate the expansive attention map SE ∈ RC×H×W (top branch of Figure 2); and (2) Discriminative attention detector, which aims to predict discriminative regions and generate the discriminative attention map SD ∈ RC×H×W (bottom branch of Figure 2). The expansive attention detector contains a drop-out layer followed by a 1 × 1 convolution, another drop-out layer, a non-linear activation, and a spatial-normalization layer, defined as follows: λc(i,j) = F (W T c Ve(:, i, j) + b c), (1) αc(i,j) = λc(i,j)∑H i ∑W j λ c (i,j) , (2) where c ∈ C and F (·) denote number of classes and the non-linear activation function, respectively. The final attention map (Am) is generated as : Am = SE SD, where denotes element-wise multiplication. A spatial average pooling is applied on Am to generate a classification score for each corresponding class and pooled-out top two class features from spatial-visual feature (Ve). The resultant 3× 2× 1024 visual embeddings are used to train our proposed TriBERT architecture, where 3 corresponds to the number of frames and 2 to the number of "objects" per frame. Keypoint (pose) Representations. Our goal is to capture human body and finger movement through keypoint representations. Therefore, we extract 26 keypoints for body joints and 21 keypoints for each hand using the AlphaPose toolbox [12]. As a result, we identify the 2D (x, y) coordinates and corresponding confidence scores of 68 body joints. Following [15], we use Graph CNN to generate semantic context comprising of those joints. Similar to prior work [52] on action recognition, we construct a Spatial-Temporal Graph Convolutional Network G = {V,E} where each node vi ∈ {V } corresponds to the body joint’s keypoint and each edge ei ∈ {E} the natural connectivity between those keypoints. We use 2D coordinates of the detected body joints with confidence scores as input to each node and construct a spatial-temporal graph by: (1) connecting human body joints within a single frame according to body structure; and (2) connecting each joint with the same joint from the consecutive frames. This way, multiple layers of spatial-temporal graph convolutions are constructed to generate higher-level features for human keypoints. We use publicly available code1 to re-train their model on our dataset and extract body joint features of size 2 × 256 × 68 before the final classification layer (corresponding to two person instances). We apply a linear layer to transform these to 3× 2× 1024 input embeddings for pose BERT where 3 corresponds to the number of visual frames and 2 to maximum number of persons per frame. Audio Representations. Consistent with prior works, we use a time-frequency representation of the input audio. We apply STFT [18] to generate the corresponding spectrogram and then transform the magnitudes of the spectrogram into the log-frequency scale for further processing. 
The size of the final input audio spectrogram is 1× 256× 256 and is used in two ways: (1) as an audio embedding for audio BERT; and (2) as the input audio for attention U-net for the task of sound source separation, which predicts individual audio spectrogram masks (see Figure 1). Before passing to audio BERT, we use a VGGish Network [22] to extract global features for input audio embedding. 1https://github.com/yysijie/st-gcn Tri-modal Co-attention. Recent works [3, 33] propose co-attentional transformer layers to generate effective representations of vision conditioned on language and vice versa. Multi-head attention Add & Norm Add & Norm Feed Forward Visual HV (i+1) HV (i) Multi-head attention Add & Norm Add & Norm Feed Forward Pose HP (j+1) HP (j) Multi-head attention Add & Norm Add & Norm Feed Forward Audio HA (k+1) HA (k) QV K(P,A) V(P,A) QP K(V,A) V(V,A) QA K(P,V) V(P,V) In this paper, we introduce a tri-modal coattentional layer, illustrated on the right, by extending ViLBERT’s co-attentional transformer layers [33]. Given intermediate representations for vision, pose, and audio, denoted as HV (i), HP (j), and HA(k), respectively, each stream computes individual query (Q), key (K), and value (V ) matrices. The keys and values from two modalities are then concatenated together and fed as input to the multi-head attention block of the third modality. As a result, the block generates attention features conditioned on the other two modalities. We keep the rest of the architecture, such as the feed forward layers, residual connections, etc. the same as a standard transformer block, which is then used to generate effective multi-modal features. 3.2.1 Training Tasks We pre-train TriBERT jointly on two tasks: instrument classification and sound source separation. Our proposed architecture has three separate streams and each stream performs an individual classification task. To train our TriBERT model, we use the MUSIC21 dataset [56], which contains 21 instruments. Weakly-supervised Visual and Pose Classification. Our visual segmentation network generates attention features for input video frames. We then apply a spatial pooling, and the resulting feature vector is fed into the visual BERT. We use a special <SOS> token at the beginning of the input frame sequence to represent the entire visual input. Following [33], we apply masking to approximately 15% of the input image regions (see Figure 1). The output of the visual BERT is a sequence of hidden representations hv0, hv1, ..hvN conditioned on the pose and audio modalities. We use mean pooling of all hidden representations to perform classification for the detected objects. Similarly, pose BERT generates a sequence of hidden representations hp0, hp1, ..hpN conditioned on the visual and audio modalities, and we apply classification based on the mean pooling of all hidden states. Due to the lack of instance annotations, we cannot use region/pose level supervision. Following [5], we use a weakly-supervised approach to perform region selection and classification. Audio Classification. Since we do not have a sequence of audio embeddings, we artificially create an audio sequence for computational convenience by repeating the VGGish audio feature to generate a sequence of hidden representations ha0, ha1, ..haN conditioned on the visual and pose modalities. This is done purely for engineering convenience to allow consistent use of tri-modal co-attention across modalities. 
We then apply audio classification on the mean feature of all hidden representations. Multi-modal Sound Source Separation. We consider sound source separation as one of our initial tasks and follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], a well-known approach to solve this problem. The goal is to mix multiple audio signals to generate an artificially complex auditory representation and then learn to separate individual sounds from the mixture. Given two input videos V1 and V2 with accompanying audio A1(t) and A2(t), we mix A1 and A2 to generate a complex audio signal mixture Am(t) = A1(t) +A2(t). Suppose V1 has two objects o1′ and o1′′ with accompanying audio a1′ and a1′′ while V2 has one object o2′ with audio a2′. The goal is to separate sounds a1′, a1′′, and a2′ from the mixture Am(t) by predicting spectrogram masks using attention U-net [37], which takes in the mixed spectrogram as input. Attention U-net contains 7 convolutions and 7 de-convolutions with skip connections. The skip connections use attention gates (AG) comprise simple additive soft attentions to highlight relevant regions of the audio spectrograms. The overhead of attention U-Net over U-Net is fairly minimal. Specifically, in terms of the number of parameters, attention U-Net contains a modest 9% more parameters as compared to U-Net and the inference speed is only 7% slower [37]. The attention U-net outputs the final magnitude of the spectrogram mask (bottom branch in Figure 1) guided by audio-visual-pose features. Following [15], we adopt a self-attention based early fusion between the bottle-neck of attention U-net with the fused features (i.e. concatenation of features) corresponding to the <SOS> tokens of three BERT streams. We combine the predicted magnitude of the spectrogram mask from attention U-net with the phase of the input spectrogram and then use inverse STFT [18] to get back the wave-form of the prediction. Training Objective. We consider weakly-supervised classification for the visual and pose modalities. Following [5], we use two data streams from the hidden state of each modality. The first stream corresponds to a class score (βclass) for each individual region to perform recognition. This is achieved by a linear layer followed by a softmax operation (see Eq. 3). The second stream computes a probability distribution (βdet) for performing a proxy detection. This is done by using another linear layer followed by another softmax operation (see Eq.4) as follows: βclass(h c)ij = eh c ij∑C t=1 e hctj , (3) βdet(hd)ij = eh d ij∑|R| t=1 e hdtj , (4) where hc ∈ RC×|R|, hd ∈ RC×|R| and C denotes the number of classes. We then aggregate the recognition and detection scores to predict the class of all image regions as follows: βR = βclass(h c) βdet(hd), where denotes an element-wise product of the two scoring metrics. Finally we apply BCE-loss [10] to train visual and pose BERT. For audio classification, we consider a classification layer to predict audio classes and similarly apply BCE-loss to train audio BERT. For the sound separation task, our goal is to learn separate spectrogram masks for each individual object. Following [55], we use a binary mask which effectively corresponds to hard attention and use per-pixel sigmoid cross entropy loss (BCE-loss) to train the network. Implementation Details. We used PyTorch to implement our network2. 
We consider three3 random consecutive frames with size 224× 224× 3 as our input sequence for visual and pose BERT and use pre-trained ResNet50 [20] to extract global visual features for further processing. For the pose stream, we first predict 2D coordinates of body and finger key points of each frame using AlphaPose [12] and then use graph CNN [52] to generate feature vectors for each keypoint. Similar to prior works [15, 55], we sub-sample audio signals to 11KHz to reduce the computational cost and then select approximately 6s of audio by random cropping. To follow the "Mix-and-Separate" framework [11, 16, 17, 38, 55], we mix audio inputs and generate a time-frequency audio spectrogram using STFT with a Hann window size of 1022 and a hop length of 256. We then transform the spectrogram into the logfrequency scale to obtain the final 256× 256 time-frequency representation. The transformers for visual/pose and audio have a hidden state size of 1024 and 512, respectively, with 8 attention heads. We use the Adam optimizer with an initial learning rate of 1e−5 and batch size of 12 to train the network on 4 GTX 1080 GPUs for 6k epochs. Training takes approximately 192 hours. 3.2.2 Runtime Inference We use the MUSIC21 dataset [56] to train our network on two pretraining tasks: classification and sound source separation. We can use this network directly for sound separation on MUSIC21. We also fine-tune the pre-trained TriBERT on the MUSIC dataset [55] with 11 audio classes, which is a sub-set of the MUSIC21 dataset. We follow a fine-tuning strategy where we modify the classification layer from each pre-trained stream and then train our proposed model end-to-end with a learning rate of 1e−7 for 1500 epochs while keeping the rest of the hyper-parameters the same as the initial task. 4 Experiments Datasets. We consider the MUSIC21 dataset [56], which contains 1365 untrimmed videos of musical solos and duets from 21 instrument classes for the initial training of our TriBERT architecture. For fine-tuning, we use the MUSIC dataset [55], which is a subset of MUSIC21, containing 685 untrimmed videos of musical solos and duets from 11 instrument classes. 2https://github.com/ubc-vision/TriBERT 3BERT-based architectures, including ours, require large GPU memory and longer training time. Therefore, we use only three frames to reduce computational cost, but the number of frames can be easily increased with the same architecture (if resources allow). Further, we would like to highlight that a pose feature for one frame, actually takes into account T=256 frames of poses using a Spatial-Temporal Graph Convolutional Network. Therefore long-term contextual pose information is taken into account [52]. 4.1 Experiments for Sound Separation Evaluation Metrics. We use three common metrics to quantify the performance of sound separation: Signal-to-Distortion Ratio (SDR), Signal-to-Interference Ratio (SIR), and Signal-to-Artifact Ratio (SAR). We report all of the results with the widely used mir_eval library [41]. Baselines. The MUSIC21 dataset contains 1365 untrimmed videos, but we found 314 of those to be missing. Moreover, the train/val/test split was unavailable. As a result, for fair comparison, we trained our baselines [15, 55] with the available videos using an 80/20 train/test split. We use publicly available code4 to train "Sound-of-Pixels" [55]. For "MUSIC-Gesture" [15], we re-implemented the model by extracting pose features using Graph CNN [52]. Our reproduced results are comparable with those reported5. 
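The audio preprocessing described in the implementation details above (resampling to about 11 kHz, a roughly 6 s random crop, STFT with a 1022-sample Hann window and hop length 256, and rescaling to a 256×256 log-frequency representation) can be sketched as follows. librosa is our choice here, and the exact log-frequency remapping and time cropping are assumptions, since the paper does not spell them out.

```python
import numpy as np
import librosa

def audio_to_logfreq_spectrogram(path, sr=11025, duration=6.0,
                                 n_fft=1022, hop=256, out_bins=256):
    y, _ = librosa.load(path, sr=sr)                       # resample to ~11 kHz
    n = int(sr * duration)
    if len(y) < n:                                         # pad short clips
        y = np.pad(y, (0, n - len(y)))
    start = np.random.randint(0, len(y) - n + 1)           # random ~6 s crop
    y = y[start:start + n]
    spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))  # (512, T) magnitude
    spec = spec[:, :out_bins]                              # keep 256 frames (assumption)
    # Log-frequency rescaling: interpolate the 512 linear bins onto 256 log-spaced bins.
    lin_freqs = np.linspace(0, sr / 2, spec.shape[0])
    log_freqs = np.geomspace(lin_freqs[1], sr / 2, out_bins)
    log_spec = np.stack([np.interp(log_freqs, lin_freqs, spec[:, t])
                         for t in range(spec.shape[1])], axis=1)
    return log_spec                                        # (256, 256) time-frequency input

# Usage (hypothetical paths): for "Mix-and-Separate", two waveforms are summed
# before this STFT step, e.g. y_mix = y1 + y2.
```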
For the MUSIC dataset, we follow the experimental protocol from [42] and consider their reported results as our baselines. Quantitative and Qualitative Results. Table 1 shows the quantitative results for the sound separation pre-training task on the MUSIC21 dataset. Here, we include the performance of our method and baselines when we use only single-source videos (solos) or multi-source (solos+duets) to train all models. Our TriBERT outperforms (10.09 vs 8.08 for single-source in SDR) baseline models in all evaluation metrics. We then fine-tune our model on the MUSIC dataset with a train/val/test split from [16] (see Table 2). Our model again outperforms all baselines in all metrics (12.34 vs 9.29 in SDR). Figure 3 illustrates the corresponding qualitative results. The 1st, 2nd, and 3rd columns show the mixed video pairs and accompanying audio mixture, respectively. Columns 4 and 5 illustrate the ground-truth spectrogram mask while columns 6/7 and 8/9 show the predicted spectrogram mask by [15] and our method, respectively. Finally, the ground truth spectrogram, predicted spectrogram by [15], and our method are illustrated in columns 10/11, 12/13, and 14/15, respectively. It is clear that TriBERT, both quantitatively and qualitatively, outperforms the state-of-the-art in sound separation. 4.2 Multi-modal Retrieval Retrieval Variants. In this experiment, we analyze the semantic alignment between the 3 modalities that TriBERT learns to encode. This is done through cross-modal retrieval, where given a single or a pair of modality embeddings, we attempt to identify the matching embedding from a different modality. We consider 5 variants: audio→ vision, vision→ audio, audio→ pose, pose→ audio, and vision+audio→ pose. Throughout this section, we refer to the embedding we have as the query 4https://github.com/hangzhaomit/Sound-of-Pixels 5The reported SIR score in [15] is 15.81, which is close to our reimplementation of their method which achieves a score of 15.27. Our reproduced SDR score is a bit lower, compared to the 10.12 reported in [15]. However, this is perhaps expected given that 23% of the dataset was missing. embedding and the embedding we want to retrieve as the result embedding. We train and evaluate on the MUSIC21 dataset, using the same 80-20 train-test split used to learn TriBERT. We consider 2 types of embeddings for the 3 modalities. First, we use the transformer-based embeddings, consisting of the concatenations of the hidden representations hv0...v3, hp0...p3, and ha0...a3 for visual, pose, and audio, respectively. Additionally, we establish a baseline by training with the embeddings used as input to the three BERT streams. This baseline can be viewed as an ablation study for the transformer layers. Retrieval Training. Similar to [33], we train using an n-way multiple-choice setting. Here, n depends on the variant of the retrieval task, where n = 4 for the vision+audio to pose variant and n = 3 for the four remaining single-modality variants. In either case, one positive pair is used and n− 1 distractors are sampled. Further details are provided in the Supplemental Materials. We use an MLP that takes as input a fusion representation of both the query and result embeddings, computed as the element-wise product of the two. The module then outputs a single logit, interpreted as a binary prediction for whether the query and result embeddings are aligned. 
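A minimal sketch of the retrieval head described above: the query and result embeddings are fused by an element-wise product and scored by an MLP, and the n alignment scores are trained with a softmax cross-entropy over one positive and n−1 distractors. The layer sizes and the linear projections to a shared dimension are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrievalHead(nn.Module):
    def __init__(self, q_dim, r_dim, common=512):
        super().__init__()
        self.proj_q = nn.Linear(q_dim, common)   # map both embeddings to a shared size
        self.proj_r = nn.Linear(r_dim, common)
        self.scorer = nn.Sequential(nn.Linear(common, common), nn.ReLU(),
                                    nn.Linear(common, 1))   # single alignment logit

    def forward(self, query, results):
        # query: (B, q_dim); results: (B, n, r_dim) -> one positive + (n-1) distractors
        q = self.proj_q(query).unsqueeze(1)                  # (B, 1, common)
        r = self.proj_r(results)                             # (B, n, common)
        return self.scorer(q * r).squeeze(-1)                # element-wise product fusion -> (B, n)

head = RetrievalHead(q_dim=2048, r_dim=4096)        # dimensions are placeholders
query = torch.randn(8, 2048)                        # e.g., audio embedding
candidates = torch.randn(8, 3, 4096)                # e.g., pose embeddings, index 0 is the match
scores = head(query, candidates)
loss = F.cross_entropy(scores, torch.zeros(8, dtype=torch.long))  # softmax over the n choices
```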
For the vision+audio → pose variant, an additional MLP, based on [15], is used to combine the vision and audio embeddings before the final element-wise product with the pose embedding. Additionally, since the transformer-based and pre-transformer embeddings are not consistent in shape across the three modalities, we also use linear layers as required to transform them to a consistent shape. This overall retrieval network is trained end-to-end. For each multiple choice, the network computes an alignment score, after which a softmax is applied across all n scores. We train using a cross-entropy loss for 750 epochs with a batch size of 64, using the Adam optimizer with an initial learning rate of 2e-5.

Retrieval Results. Figure 4 shows the qualitative results for two variants of retrieval. Additionally, Table 3 shows quantitative results for the 5 retrieval variants using the transformer-based representation, the baseline pre-transformer representation, and a model that simply selects randomly from the pool. We see that retrieval using the transformer-based embeddings results in significantly better performance than the pre-transformer ones. This shows that the tri-modal co-attention modules are an integral component in learning a semantically meaningful relationship between the three modalities. Notably, in Table 3, we can see that vision+audio → pose is worse than audio → pose in top-1 accuracy. The performance of the two models is not necessarily directly comparable. Specifically, there are two issues that should be considered:

- The input dimensionality and number of parameters of the vision+audio retrieval model are significantly larger, with an additional MLP layer used for fusion. This means that the vision+audio model is more prone to over-fitting, exhibited in the lower top-1 performance. Note that the top-5 and top-10 performance of vision+audio → pose is better.
- The number of distractors (n − 1) is different in the two settings. For single-modality retrieval variants, we use two distractors (negative pairings), while for the two-modality variant, we use three distractors. This may also marginally affect the performance, since in the n-way classification, having more distractors puts more focus on the negatives.

However, we want to stress that the goal of these experiments is not to compare which modality or combination of modalities is best for retrieval. Instead, the goal is to illustrate the effectiveness of the TriBERT representations. Each of the five retrieval models is simply an instance of a retrieval task. We could use any alternative (more sophisticated) retrieval models here. The key observation is that in all five cases, TriBERT representations perform significantly better in retrieval compared with the baseline representations (used as input to TriBERT). This is strong evidence that TriBERT representations are effective. We make no claims with regard to the optimality of the retrieval formulation or objective; it is simply used as a proxy for evaluating TriBERT representations.

5 Conclusion

In this paper, we introduce TriBERT, a three-stream model with tri-modal co-attention blocks to generate a generic representation for multiple audio-visual tasks. We pre-train our model on the MUSIC21 dataset and show that our model exceeds the state-of-the-art for sound separation. We also find that TriBERT learns more generic and aligned multi-modal representations, excelling on the cross-modal audio-visual-pose retrieval task.
In this work, we limit ourselves to two datasets and fundamental audio-visual tasks. In the future, we plan to consider more datasets and to expand to a broader set of tasks (e.g., generation). The role of positional embeddings should also be explored. Acknowledgments: This work was funded in part by the Vector Institute for AI, a Canada CIFAR AI Chair, an NSERC Canada Research Chair (CRC), and NSERC Discovery and Discovery Accelerator Supplement Grants. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute (www.vectorinstitute.ai/#partners). Additional hardware support was provided by a John R. Evans Leaders Fund CFI grant and Compute Canada under the Resource Allocation Competition award.
1. What is the main contribution of the paper regarding multi-modal representations?
2. How does the proposed approach differ from other related works, particularly in terms of feature extraction and training tasks?
3. What are the strengths and limitations of the paper regarding its originality and significance?
4. Are there any concerns or suggestions regarding the experimental design and results, such as ablation studies or qualitative analysis?
5. Can the authors provide further explanations or justifications for certain design choices or unexpected findings?
Summary Of The Paper Review
Summary Of The Paper The paper presents TriBERT a BERT-like model for learning multi-model representations of three modalities simultaneously, namely: vision, pose and audio. The architecture of TriBERT takes inspiration from the visual-linguistic model VilBERT (notably known for the applications in VQA) in terms of its Transformer architecture and of its multi-modal attention (the latter is extended in the paper to 3 modalities instead of 2). TriBERT training comprises four losses to optimize. In particular, after the projection of the 3 unimodal feature sequences into the joint feature space by the multimodal transformer, there are 3 sequences of features, namely: <h_v>, <h_p> and <h_a> for respectively enriched visual, pose and audio features. Then, the <h_v> and <h_p> features are (independently) trained to localize and predict the sound source (the musical instrument(s) which is (are) played in the recording) using 2 (independent) BCE losses. It should be noted that the localization part is learnt in a weakly-supervised manner as only global instrument classification annotations are available in the training data. The <h_a> audio features are trained to classify the sound source (the 3rd BCE loss). Finally, the 4th supervision task is the multi-modal sound source separation one. The original audio spectrogram is mixed with other audio spectrogram(s) and the task consists in predicting a spectrogram mask which would separate the original audio from the others (the 4th BCE loss). Following [37], for this last task, a U-Net neural architecture is used. U-Net takes the mixed audio spectrogram at its input and the fusion of <h_v>, <h_p> and <h_a> at its bottleneck representation to predict the desired spectrogram mask. Once trained, TriBERT is evaluated on the MUSIC21 dataset for the sound separation task, and it also fine-tuned and evaluated for the multi-modal retrieval downstream task. Review Originality The paper has a primary original contribution: in particular, to the best of my knowledge, it is the 1st study to report a conclusive approach for training of a tri-modal BERT-like model on visual, audio and pose modalities with impressive results on such a complicate downstream task as multi-modal retrieval. The paper has also two other minor contributions. Firstly, it demonstrates that unimodal visual features can be learnt from scratch in an end-2-end manner (while a common approach is to extract object features with a pretrained object detector such as Faster-RCNN as it is done in VilBERT, for example), and secondly, they illustrate that sound source prediction, separation and localization are powerful supervision tasks which are enough to train a complex Transformer architecture with relatively little annotations (only the tags of played instruments). Quality I believe that the authors have properly cited and compared their approach to all significant related work, most notable to [15] and [55]. Moreover, the approach of [15] has been re-implemented and retrained in order to use the same unimodal features for the pose modality (the ones learnt with a graph CNN on keypoints extracted with the AlphaPose toolbox). All in all, in my opinion, the paper is technically sound. Clarity The paper is very well written and easy to follow. The authors commit to publishing the source code upon the paper acceptance. Typos Line 93: “of of” - > “of”. 
Significance To my knowledge, this is the 1st paper which has successfully managed to learn joint representations of vision, pose and audio inputs with a single BERT-like architecture. Moreover, only very shallow annotations (the musical instrument type) of the training dataset (MUSIC21) have been used for training. Given the convincing experimental results on the sound separation task where the strong Sound-of-Pixels [55] and MUSIC-Gesture [15] baselines have been largely outperformed, as well as the multi-modal retrieval results reported in Table 3 which demonstrate that TriBERT managed to learn a joint feature space for the three modalities, I believe that this paper will be useful for the community as a 1st example of what might become a new standard in audio-visual processing (as it was the case for VilBERT in the visual-linguistic domain). Therefore, I am rather inclined to recommend the acceptance of this paper. At the same time, I would like to highlight the following limitations and suggestions / questions for the authors: The experiments are done only on a single dataset (MUSIC is actually a subset of MUSIC21). The results would have been even more convincing, had there been another dataset for evaluation. I would be interested to see an ablation on the contributions of each modality to the final sound separation results reported in Tables 1 and 2. Thus, I believe that the pose and acoustic modalities are much more important than the visual one for this task. Does the visual modality really contribute in the final score? It would also be informative to see some qualitative results on the sound localization given the fact that this task has been learnt in a weakly supervised manner (without explicit annotations on the localization). Finally, how can you explain that the retrieval Top-1 accuracy for the pose modality given only audio modality is higher than given both audio and visual modalities (Table 3)?
NIPS
Title CUP: Critic-Guided Policy Reuse Abstract The ability to reuse previous policies is an important aspect of human intelligence. To achieve efficient policy reuse, a Deep Reinforcement Learning (DRL) agent needs to decide when to reuse and which source policies to reuse. Previous methods solve this problem by introducing extra components to the underlying algorithm, such as hierarchical high-level policies over source policies, or estimations of source policies’ value functions on the target task. However, training these components induces either optimization non-stationarity or heavy sampling cost, significantly impairing the effectiveness of transfer. To tackle this problem, we propose a novel policy reuse algorithm called Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. CUP utilizes the critic, a common component in actor-critic methods, to evaluate and choose source policies. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, and forms a guidance policy. The guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy. Then the target policy is regularized to imitate the guidance policy to perform efficient policy search. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms. 1 Introduction Human intelligence can solve new tasks quickly by reusing previous policies (Guberman & Greenfield, 1991). Despite remarkable success, current Deep Reinforcement Learning (DRL) agents lack this knowledge transfer ability (Silver et al., 2017; Vinyals et al., 2019; Ceron & Castro, 2021), leading to enormous computation and sampling cost. As a consequence, a large number of works have been studying the problem of policy reuse in DRL, i.e., how to efficiently reuse source policies to speed up target policy learning (Fernández & Veloso, 2006; Barreto et al., 2018; Li et al., 2019; Yang et al., 2020b). A fundamental challenge towards policy reuse is: how does an agent with access to multiple source policies decide when and where to use them (Fernández & Veloso, 2006; Kurenkov et al., 2020; Cheng et al., 2020)? Previous methods solve this problem by introducing additional components to the underlying DRL algorithm, such as hierarchical high-level policies over source policies (Li et al., 2018, 2019; Yang et al., 2020b), or estimations of source policies’ value functions on the target task (Barreto et al., 2017, 2018; Cheng et al., 2020). However, training these components significantly impairs the effectiveness of transfer, as hierarchical structures induce optimization non-stationarity (Pateria et al., 2021), and estimating the value functions for every source policy is computationally expensive and with high sampling cost. Thus, the objective of this study is to address the question: 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Can we achieve efficient transfer without training additional components? Notice that actor-critic methods (Lillicrap et al., 2016; Fujimoto et al., 2018; Haarnoja et al., 2018) learn a critic that approximates the actor’s Q function and serves as a natural way to evaluate policies. Based on this observation, we propose a novel policy reuse algorithm that utilizes the critic to choose source policies. 
The proposed algorithm, called Critic-gUided Policy reuse (CUP), avoids training any additional components and achieves efficient transfer. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, thus forming a guidance policy. Then CUP guides learning by regularizing the target policy to imitate the guidance policy. This approach has the following advantages. First, the one-step improvement can be estimated simply by querying the critic, and no additional components are needed to be trained. Secondly, the guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy, which ensures that CUP can reuse the source policies to improve the current target policy. Finally, CUP is conceptually simple and easy to implement, introducing very few hyper-parameters to the underlying algorithm. We evaluate CUP on Meta-World (Yu et al., 2020), a popular reinforcement learning benchmark composed of multiple robot arm manipulation tasks. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms. 2 Preliminaries Reinforcement learning (RL) deals with Markov Decision Processes (MDPs). A MDP can be modelled by a tuple (S,A, r, p, γ), with state space S, action space A, reward function r(s, a), transition function p(s′|s, a), and discount factor γ (Sutton & Barto, 2018). In this study, we focus on MDPs with continuous action spaces. RL’s objective is to find a policy π(a|s) that maximizes the cumulative discounted return R(π) = Eπ [ ∑∞ t=0 γ tr(st, at)]. While CUP is generally applicable to a wide range of actor-critic algorithms, in this work we use SAC (Haarnoja et al., 2018) as the underlying algorithm. The soft Q function and soft V function (Haarnoja et al., 2017) of a policy π are defined as: Qπ(s, a) = r(s, a) + γEs′∼p(·|s,a) [Vπ(s)] (1) Vπ(s) = Ea∼π(·|s) [Qπ(s, a)− α log π(a|s)] , (2) where α > 0 is the entropy weight. SAC’s loss functions are defined as: Lcritic(Qθ) = E(s,a,r,s′)∼D [ Qθ(s, a)− (r + γVθ(s ′)) ]2 Lactor(πϕ) = Es∼D [ Ea∼πϕ(·|s) [α log πϕ(a|s)−Qθ(s, a)] ] Lentropy(α) = Es∼D [ Ea∼πϕ(·|s) [ −α log πϕ(a|s)− αH ]] , (3) where D is the replay buffer, H is a hyper-parameter representing the target entropy, θ and ϕ are network parameters, θ is target network’s parameters, and Vθ(s) = Ea∼π(a|s)[Qθ(s, a) − α log π(a|s)] is the target soft value function. We define the soft expected advantage of action probability distribution πi(·|s) over policy πj at state s as: EAπj (s, πi) = Ea∼πi(·|s) [ Qπj (s, a)− α log πi(a|s)− Vπj (s) ] . (4) EAπj (s, πi) measures the one-step performance improvement brought by following πi instead of πj at state s, and following πj afterwards. The field of policy reuse focuses on solving a target MDP M efficiently by transferring knowledge from a set of source policies {π1, π2, ..., πn}. We denote the target policy learned on M at iteration t as πttar, and its corresponding soft Q function as Qπttar . In this work, we assume that the source policies and the target policy share the same state and action spaces. 3 Critic-Guided Policy Reuse This section presents CUP, an efficient policy reuse algorithm that does not require training any additional components. CUP is built upon actor-critic methods. In each iteration, CUP uses the critic to form a guidance policy from the source policies and the current target policy. Then CUP guides policy search by regularizing the target policy to imitate the guidance policy. 
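To make the critic-guided mechanism concrete, below is a minimal PyTorch-style sketch of the core CUP computation summarized above: the soft expected advantage of Eq. (4) is estimated by Monte Carlo samples from each candidate policy, the guidance policy is the per-state argmax detailed in Section 3.1, and the actor is regularized toward it with the adaptively weighted KL term of Section 3.2. The policy interfaces (sample_with_log_prob, rsample_with_log_prob, kl_to), the number of Monte Carlo samples, and the values of beta1/beta2 are our own assumptions, not the authors' released implementation.

```python
import torch

@torch.no_grad()
def soft_expected_advantage(critic, policy, target_policy, states, alpha, n_samples=10):
    """Monte Carlo estimate of Eq. (4): E_{a~pi}[Q(s,a) - alpha*log pi(a|s)] - V(s),
    using the SAC critic of the current target policy (interfaces assumed)."""
    def soft_value(pi):
        vals = []
        for _ in range(n_samples):
            a, log_p = pi.sample_with_log_prob(states)       # assumed policy interface
            vals.append(critic(states, a) - alpha * log_p)
        return torch.stack(vals).mean(0)                     # (B,)
    return soft_value(policy) - soft_value(target_policy)

def cup_actor_loss(critic, target_policy, source_policies, states, alpha,
                   beta1=2.0, beta2=0.5):
    # 1) Per-state guidance policy: candidate with the largest (approximate) soft advantage.
    candidates = list(source_policies) + [target_policy]
    advs = torch.stack([soft_expected_advantage(critic, pi, target_policy, states, alpha)
                        for pi in candidates], dim=1)         # (B, n+1)
    best = advs.argmax(dim=1)                                 # guidance index per state

    # 2) Standard SAC actor loss for the target policy.
    a, log_p = target_policy.rsample_with_log_prob(states)
    sac_loss = (alpha * log_p - critic(states, a)).mean()

    # 3) Adaptive KL regularization toward the guidance policy (single-sample V estimate).
    with torch.no_grad():
        v_tar = critic(states, a) - alpha * log_p
        beta_s = beta1 * torch.minimum(advs.gather(1, best[:, None]).squeeze(1),
                                       beta2 * v_tar.abs())
    # KL from the target policy to each candidate; only the selected one is penalized.
    kl = torch.stack([target_policy.kl_to(pi, states) for pi in candidates], dim=1)
    kl_term = (beta_s * kl.gather(1, best[:, None]).squeeze(1)).mean()
    return sac_loss + kl_term
```

Because the current target policy is itself a candidate, its advantage is zero by construction, so the adaptive weight vanishes at states where no source policy helps.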
Section 3.1 presents how to form a guidance policy by aggregating source policies through the critic, and proves that the guidance policy is guaranteed to be a monotonic improvement over the current target policy. We also prove that the target policy is theoretically guaranteed to improve by imitating the guidance policy. Section 3.2 presents the overall framework of CUP. 3.1 Critic-Guided Source Policy Aggregation CUP utilizes action probabilities proposed by source policies to improve the current target policy, and forms a guidance policy. At iteration t of target policy learning, for each state s, the agent has access to a set of candidate action probability distributions proposed by the n source policies and the current target policy: Πst = {π1(·|s), π2(·|s), ..., πn(·|s), πttar(·|s)}. The guidance policy πtg can be formed by combining the action probability distributions that have the largest soft expected advantage over πttar at each state s: πtg(·|s) = argmax π(·|s)∈Πst EAπttar (s, π) = argmax π(·|s)∈Πst Ea∼π(·|s) [ Qπttar (s, a)− α log π(a|s) ] for all s ∈ S. (5) The second equation holds as adding Vπttar (s) to all soft expected advantages does not affect the result of the argmax operator. Eq. 5 implies that at each state, we can choose which source policy to follow simply by querying its expected soft Q value under πttar. Noticing that with function approximation, the exact soft Q value cannot be acquired. The following theorem enables us to form the guidance policy with an approximated soft Q function, and guarantees that the guidance policy is a monotonic improvement over the current target policy. Theorem 1 Let Q̃πttar be an approximation of Qπttar such that |Q̃πttar (s, a)−Qπttar (s, a)| ≤ ϵ for all s ∈ S, a ∈ A. (6) Define π̃tg(·|s) = argmax π(·|s)∈Πst Ea∼π(·|s) [ Q̃πttar (s, a)− α log π(a|s) ] for all s ∈ S. (7) Then, V π̃tg (s) ≥ Vπttar (s)− 2ϵ 1− γ for all s ∈ S. (8) Theorem 1 provides a way to choose source policies using an approximation of the current target policy’s soft Q value. As SAC learns such an approximation, the guidance policy can be formed without training any additional components. The next question is, how to incorporate the guidance policy π̃tg into target policy learning? The following theorem demonstrates that policy improvement can be guaranteed if the target policy is optimized to stay close to the guidance policy. Theorem 2 If DKL ( πt+1tar (·|s)||π̃tg(·|s) ) ≤ δ for all s ∈ S, (9) then Vπt+1tar (s) ≥ Vπttar (s)− √ 2 ln 2δ(R̃max + αHt+1max) (1− γ)2 − 2ϵ+ αH̃max 1− γ for all s ∈ S, (10) where R̃max = max s,a |r(s, a)| is the largest possible absolute value of the reward, Ht+1max = max s H(πt+1tar (·|s)) is the largest entropy of πt+1tar , and H̃max = max s ∣∣H(πttar(·|s))−H(πt+1tar (·|s))∣∣ is the largest possible absolute difference of the policy entropy. According to Theorem 2, the target policy can be improved by minimizing the KL divergence between the target policy and the guidance policy. Thus we can use the KL divergence as an auxiliary loss to guide target policy learning. Proofs of this section are deferred to Appendix B.1 and Appendix B.2. Theorem 1 and Theorem 2 can be extended to common “hard” value functions (deferred to Appendix B.3), so CUP is also applicable to actor-critic algorithms that uses “hard” Bellman updates, such as A3C (Mnih et al., 2016). 3.2 CUP Framework In this subsection we propose the overall framework of CUP. As shown in Fig. 1, at each iteration t, CUP first forms a guidance policy π̃tg according to Eq. 
7, then provides additional guidance to policy search by regularizing the target policy πt+1tar to imitate π̃tg (Wu et al., 2019; Fujimoto & Gu, 2021). Specifically, CUP minimizes the following loss to optimize πt+1tar : LCUP (π t+1 tar ) = Lactor(π t+1 tar ) + Es∼D [ βsDKL ( πt+1tar (·|s) ||π̃tg (·|s) )] , (11) where Lactor is the original actor loss defined in Eq. (3), and βs > 0 is a hyper-parameter controlling the weight of regularization. In practice, we find that using a fixed weight for regularization has two problems. First, it is difficult to balance the scale between Lactor and the regularization term, because Lactor grows as the Q value gets larger. Secondly, a fixed weight cannot reflect the agent’s confidence on π̃tg. For example, when no source policies have positive soft expected advantages, π̃tg = π t tar. Then the agent should not imitate π̃tg anymore, as π̃tg cannot provide any guidance to further improve performance. Noticing that the soft expected advantage serves as a natural confidence measure, we weight the KL divergence with corresponding soft expected advantage at that state: βs = β1 min ( ẼAπttar (s, π̃ t g), β2|Ṽπttar (s)| ) , (12) where ẼAπttar (s, π̃ t g) = Ea∼π̃tg(·|s) [ Q̃πttar (s, a)− α log π t g(a|s)− Ṽπttar (s) ] is the approxi- mated soft expected advantage, β1, β2 > 0 are two hyper-parameters, and Ṽπttar (s) = Ea∼πttar(·|s) [ Q̃πttar (s, a)− α log π t tar(a|s) ] is the approximated soft value function. This adaptive regularization weight automatically balances between the two losses, and ignores the regularization term at states where π̃tg cannot improve over π t tar anymore. We further upper clip the expected advantage with the absolute value of β2Ṽπttar to avoid the agent being overly confident about π̃ t g due to function approximation error ϵ. CUP’s pseudo-code is presented in Alg. 1. The modifications CUP made to SAC are marked in red. Additional implementation details are deferred to Appendix D.1. Algorithm 1 CUP Require: Source policies {π1, π2, ..., πn}, hyper-parameters λθ1 , λθ2 , λπ, λα, τ,H, β1, β2 Initialize replay buffer D Initialize actor πϕ, entropy weight α, critic Qθ1 ,Qθ2 , target networks Qθ1 ← Qθ1 , Qθ2 ← Qθ2 while not done do for each environment step do at ∼ πθ st+1 ∼ p(st+1|st, at) D ← D ∪ {st, at, r(st, at), st+1} end for for each gradient step do Sample minibatch b from D Query source policies’ action probabilities {π1(·|s), π2(·|s), ..., πn(·|s)} for states in b Compute expected advantages according to Eq. (4), form π̃tg according to Eq. (7) θi ← θi − λQ∇̂θiLcritic(Qθi) for i ∈ {1, 2} ϕ← ϕ− λπ∇̂ϕLCUP (πϕ) α← α− λα∇̂αLentropy(α) θi ← τθi + (1− τ)θi for i ∈ {1, 2} end for end while 4 Experiments We evaluate on Meta-World (Yu et al., 2020), a popular reinforcement learning benchmark composed of multiple robot manipulation tasks. These tasks are both correlated (performed by the same Sawyer robot arm) and distinct (interacting with different objects and having different reward functions), and serve as a proper evaluation benchmark for policy reuse. The source policies are achieved by training on three representative tasks: Reach, Push, and Pick-Place. We choose several complex tasks as target tasks, including Hammer, Peg-Insert-Side, Push-Wall, Pick-Place-Wall, Push-Back, and Shelf-Place. Among these target tasks, Hammer and Peg-Insert-Side require interacting with objects unseen in the source tasks. In Push-Wall and Pick-Place-Wall, there is a wall between the object and the goal. 
In Push-Back, the goal distribution is different from Push. In Shelf-Place, the robot is required to put a block on a shelf, and the shelf is unseen in the source tasks. Video demonstrations of these tasks are available at https://meta-world.github.io/. Similar to the settings in Yang et al. (2020a), in our experiments the goal position is randomly reset at the start of every episode. Codes are available at https://github.com/NagisaZj/CUP. 4.1 Transfer Performance on Meta-World We compare against several representative baseline algorithms, including HAAR (Li et al., 2019), PTF (Yang et al., 2020b), MULTIPOLAR (Barekatain et al., 2021), and MAMBA (Cheng et al., 2020). Among these algorithms, HAAR and PTF learn hierarchical high-level policies over source policies. MAMBA aggregates source policies’ V functions to form a baseline function, and performs policy improvement over the baseline function. MULTIPOLAR learns a weighted sum of source policies’ action probabilities, and learns an additional network to predict residuals. We also compare against the original SAC algorithm. All the results are averaged over six random seeds. As shown in Figure 2, CUP is the only algorithm that achieves efficient transfer on all six tasks, significantly outperforming the original SAC algorithm. HAAR has a jump-start performance on Push-Wall and Pick-Pick-Wall, but fails to further improve due to optimization non-stationarity induced by jointly training high-level and low-level policies. MULTIPOLAR achieves comparable performance on Push-Wall and Peg-Insert-Side, because the Push source policy is useful on Push-Wall (implied by HAAR’s good jump-start performance), and learning residuals on Peg-Insert-Side is easier (implied by SAC’s fast learning). In Pick-Place-Wall, the Pick-Place source policy is useful, but the residual is difficult to learn, so MULTIPOLAR does not work. For the remaining three tasks, the source policies are less useful, and MULTIPOLAR fails on these tasks. PTF fails as its hierarchical policy only gets updated when the agent chooses similar actions to one of the source policies, which is quite rare when the source and target tasks are distinct. MAMBA fails as estimating all source policies’ V functions accurately is sampling inefficient. Algorithm performance evaluated by success rate is deferred to Appendix E.1. 4.2 Analyzing the Guidance Policy This subsection provides visualizations of CUP’s source policy selection. Fig. 3 shows the percentages of each source policy being selected throughout training on Push-Wall. At early stages of training, the source policies are selected more frequently as they have positive expected advantages, which means that they can be used to improve the current target policy. As training proceeds and the target policy becomes better, the source policies are selected less frequently. Among these three source policies, Push is chosen more frequently than the other two source policies, as it is more related to the target task. Figure 4 presents the source policies’ expected advantages over an episode at convergence in Pick-Place-Wall. The Push source policy and Reach source policy almost always have negative expected advantages, which implies that these two source policies can hardly improve the current target policy anymore. Meanwhile, the Pick-Place source policy has expected advantages close to zero after 100 environment steps, which implies that the Pick-Place source policy is close to the target policy at these steps. 
Analyses on all six tasks as well as analyses on HAAR’s source policy selection are deferred to Appendix E.2 and Appendix E.6, respectively. 4.3 Ablation Study This subsection evaluates CUP’s sensitivity to hyper-parameter settings and the number of source policies. We also evaluate CUP’s robustness against random source policies, which do not provide meaningful candidate actions for solving target tasks. 4.3.1 Hyper-Parameter Sensitivity For all the experiments in Section 4.1, we use the same set of hyper-parameters, which indicates that CUP is generally applicable to a wide range of tasks without particular fine-tuning. CUP introduces only two additional hyper-parameters to the underlying SAC algorithm, and we further test CUP’s sensitivity to these additional hyper-parameters. As shown in Fig. 5, CUP is generally robust to the choice of hyper-parameters and achieves stable performance. 4.3.2 Number of Source Policies We evaluate CUP as well as baseline algorithms on a larger source policy set. We add three policies to the original source policy set, which solve three simple tasks including Drawer-Close, Push-Wall, and Coffee-Button. This forms a source policy set composed of six policies. As shown in Fig. 6, CUP is still the only algorithm that solves all the six target tasks efficiently. MULTIPOLAR suffers from a decrease in performance, which indicates that learning the weighted sum of source policies’ actions becomes more difficult as the number of source policies grows. The rest of the baseline algorithms have similar performance to those using three source policies. Fig. 7 provides a more direct comparison of CUP’s performance with different number of source policies. CUP is able to utilize the additional source policies to further improve its performance, especially on Pick-Place-Wall and Peg-Insert-Side. Further detailed analysis is deferred to Appendix E.3. 4.3.3 Interference of Random Source Policies In order to evaluate the efficiency of CUP’s critic-guided source policy aggregation, we add random policies to the set of source policies. As shown in Fig. 8(a), adding up to 3 random source policies does not affect CUP’s performance. This indicates that CUP can efficiently choose which source policy to follow even if there exist many source policies that are not meaningful. Adding 4 and 5 random source policies leads to a slight drop in performance. This drop is because that as the number of random policies grows, more random actions are sampled, and taking argmax over these actions’ expected advantages is more likely to be affected by errors in value estimation. To further investigate CUP’s ability to ignore unsuitable source policies, we design another transfer setting that consists of another two source policy sets. The first set consists of three random policies that are useless for the target task, and the second set adds the Reach policy to the first set. As demonstrated in Fig. 8(b), when none of the source policies are useful, CUP performs similarly to the original SAC, and its sample efficiency is almost unaffected by the useless source policies. When there exists a useful source policy, CUP can efficiently utilize it to improve performance, even if there are many useless source policies. 5 Related Work Policy reuse. 
A series of works on policy reuse utilize source policies for exploration in value-based algorithms (Fernández & Veloso, 2006; Li & Zhang, 2018; Gimelfarb et al., 2021), but they are not applicable to policy gradient methods due to the off-policyness problem (Fujimoto et al., 2019). ACTeach (Kurenkov et al., 2020) mitigates this problem by improving the actor over behavior policy’s value estimation, but still fails in more complex tasks. One branch of methods train hierarchical high-level policies over source policies. CAPS (Li et al., 2018) guarantees the optimality of the hierarchical policies by adding primitive skills to the low-level policy set, but is inapplicable to MDPs with continuous action spaces. HAAR (Li et al., 2019) fine-tunes low-level policies to ensure optimality, but joint training of high-level and low-level policies induce optimization non-stationarity (Pateria et al., 2021). PTF (Yang et al., 2020b) trains a hierarchical policy, which is imitated by the target policy. However, the hierarchical policy only gets updated when the target policy chooses similar actions to one of the source policies, so PTF fails in complex tasks with large action spaces. Another branch of works aggregate source policies via their Q functions or V functions on the target task. Barreto et al. (2017) and Barreto et al. (2018) focus on the situation where source tasks and target tasks share the same dynamics, and aggregate source policies by choosing the policy that has the largest Q at each state. They use successor features to mitigate the heavy computation cost brought by estimating Q functions for all source policies. MAMBA (Cheng et al., 2020) forms a baseline function by aggregating source policies’ V functions, and guides policy search by improving the policy over the baseline function. Finally, MULTIPOLAR (Barekatain et al., 2021) learns a weighted sum over source policies’ actions, and learns an auxiliary network to predict residuals around the aggregated actions. MULTIPOLAR is computationally expensive, as it requires querying all the source policies at every sampling step. Our proposed method, CUP, focuses on the setting of learning continuous-action MDPs with actor-critic methods. CUP is both computationally and sampling efficient, as it does not require training any additional components. Policy regularization. Adding regularization to policy optimization is a common approach to induce prior knowledge into policy learning. Distral (Teh et al., 2017) achieves inter-task transfer by imitating an average policy distilled from policies of related tasks. In offline RL, policy regularization serves as a common technique to keep the policy close to the behavior policy used to collect the dataset (Wu et al., 2019; Nair et al., 2020; Fujimoto & Gu, 2021). CUP uses policy regularization as a means to provide additional guidance to policy search with the guidance policy. 6 Conclusion In this study, we address the problem of reusing source policies without training any additional components. By utilizing the critic as a natural evaluation of source policies, we propose CUP, an efficient policy reuse algorithm without training any additional components. CUP is conceptually simple, easy to implement, and has theoretical guarantees. Empirical results demonstrate that CUP achieves efficient transfer on a wide range of tasks. 
As for future work, CUP assumes that all source policies and the target policy share the same state and action spaces, which limits CUP’s application to more general scenarios. One possible future direction is to take inspiration from previous works that map the state and action spaces of an MDP to another MDP with similar high-level structure (Wan et al., 2020; Zhang et al., 2020; Heng et al., 2022; van der Pol et al., 2020b,a). Another interesting direction is to incorporate CUP into the continual learning setting (Rolnick et al., 2019; Khetarpal et al., 2020), in which an agent gradually enriches its source policy set in an online manner. Acknowledgements This work is supported in part by Science and Technology Innovation 2030 – “New Generation Artificial Intelligence” Major Project (No. 2018AAA0100904), National Natural Science Foundation of China (62176135), and China Academy of Launch Vehicle Technology (CALT2022-18).
1. How does the proposed method, CUP, handle different sets of source policies, especially when some policies are unsuitable or harmful for the target task?
2. Can the authors provide an intuitive explanation for why adding four random policies hurts the performance of CUP in Figure 5(b)?
3. How does CUP deal with situations where the guidance policy and the target policy are very different, and how does it affect the soft estimated advantage calculation?
4. What are the potential issues with choosing a source policy that is not beneficial for the target policy learning, and how can they be addressed?
5. How does the assumption of shared state and action space between source policies and the target policy limit the applicability of CUP, and what possible solutions exist to overcome this limitation?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper considers the problem of policy reuse in reinforcement learning. It assumes that a set of source policies pre-trained on related tasks is available. The agent interacts with the environment to learn a target policy for the target task and hopes to make use of the available source policies. The problem is to determine when and how to use which source policy. This paper proposes CUP, which employs the critic learned on the target task to select the proper source policy in each state. To be specific, there is a set of source policies, and the agent's current target policy is also considered as one possible choice in the source policy set. CUP chooses the source policy with the largest one-step improvement over the current target policy. The chosen source policies across states together form the guidance policy. It is theoretically proved that the value of the guidance policy can be higher than the value of the current target policy if the learned critic is accurate enough. Then the target policy is trained to imitate the guidance policy by minimizing their KL divergence. The weight of this KL divergence term in policy learning is adaptively changed during training, according to the estimated advantages of the guidance policy. The authors conduct experiments on Meta-World and compare CUP with basic SAC and the recent works HAAR, PTF, MULTIPOLAR, and MAMBA. The experimental results show the advantages of CUP. The ablation study shows that CUP is relatively robust to the choice of hyper-parameter values. Adding more source policies can be beneficial if the source policy is related to the target task. In the set of source policies, adding up to 3 random policies does not hurt the performance of CUP, but adding 4 random policies is problematic. Strengths And Weaknesses The paper is well-organized and generally written clearly. The proposed method CUP is novel and interesting, with theoretical and empirical support. Pros: The proposed method is technically reasonable and supported by theoretical grounding. The evaluation is solid, and the analysis of CUP in the ablation study helps understand CUP better. Cons: The proposed method theoretically relies on a well-trained critic. However, the choice of source policy might be problematic if the value estimate by the critic is not accurate enough, especially when the source policy and target policy are quite different. One critical detail is not clearly explained in the paper: how to calculate the soft estimated advantage for each source policy according to equation (4)? Getting the expectation seems not very simple given a continuous action space. Then it is hard to tell whether CUP is really much more convenient than prior works using hierarchical reinforcement learning or source policy value estimation. Questions Could the proposed method perform robustly in settings with different sets of source policies? For example, what will happen if there are some policies really unsuitable or even harmful for the target task? Could CUP properly ignore these source policies? Any intuitive explanation about why 4 random policies hurt the performance much in Figure 5(b)? To get the guidance policy according to equation (7), what will happen when $\pi$ and $\pi^t_{tar}$ are very different and $\tilde{Q}_{\pi^t_{tar}}$ suffers from the over-estimation issue?
For example, the target policy $\pi^t_{tar}$ may rarely select an action $a_0$ at the state $s$, so the value estimate $\tilde{Q}_{\pi^t_{tar}}(s, a_0)$ is much higher than the true value $Q_{\pi^t_{tar}}(s, a_0)$. Then the source policy that often selects action $a_0$ at the state $s$ will be chosen at this state. Yet, it may not be really beneficial for the target policy learning. This choice of source policy will hurt the sample efficiency of CUP. Do you observe this issue? Any comments about preventing it? Limitations It seems that the authors just briefly mention one limitation in Section 2: "We assume that the source policies and the target policy share the same state and action space". This assumption is widely used in prior works about policy transfer, and the authors did not propose to solve this issue. One possible choice is to learn state and action correspondence to transfer the source policy to the target state and action space, e.g., 'Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency'.
NIPS
Title CUP: Critic-Guided Policy Reuse Abstract The ability to reuse previous policies is an important aspect of human intelligence. To achieve efficient policy reuse, a Deep Reinforcement Learning (DRL) agent needs to decide when to reuse and which source policies to reuse. Previous methods solve this problem by introducing extra components to the underlying algorithm, such as hierarchical high-level policies over source policies, or estimations of source policies’ value functions on the target task. However, training these components induces either optimization non-stationarity or heavy sampling cost, significantly impairing the effectiveness of transfer. To tackle this problem, we propose a novel policy reuse algorithm called Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. CUP utilizes the critic, a common component in actor-critic methods, to evaluate and choose source policies. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, and forms a guidance policy. The guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy. Then the target policy is regularized to imitate the guidance policy to perform efficient policy search. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms. 1 Introduction Human intelligence can solve new tasks quickly by reusing previous policies (Guberman & Greenfield, 1991). Despite remarkable success, current Deep Reinforcement Learning (DRL) agents lack this knowledge transfer ability (Silver et al., 2017; Vinyals et al., 2019; Ceron & Castro, 2021), leading to enormous computation and sampling cost. As a consequence, a large number of works have been studying the problem of policy reuse in DRL, i.e., how to efficiently reuse source policies to speed up target policy learning (Fernández & Veloso, 2006; Barreto et al., 2018; Li et al., 2019; Yang et al., 2020b). A fundamental challenge towards policy reuse is: how does an agent with access to multiple source policies decide when and where to use them (Fernández & Veloso, 2006; Kurenkov et al., 2020; Cheng et al., 2020)? Previous methods solve this problem by introducing additional components to the underlying DRL algorithm, such as hierarchical high-level policies over source policies (Li et al., 2018, 2019; Yang et al., 2020b), or estimations of source policies’ value functions on the target task (Barreto et al., 2017, 2018; Cheng et al., 2020). However, training these components significantly impairs the effectiveness of transfer, as hierarchical structures induce optimization non-stationarity (Pateria et al., 2021), and estimating the value functions for every source policy is computationally expensive and with high sampling cost. Thus, the objective of this study is to address the question: 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Can we achieve efficient transfer without training additional components? Notice that actor-critic methods (Lillicrap et al., 2016; Fujimoto et al., 2018; Haarnoja et al., 2018) learn a critic that approximates the actor’s Q function and serves as a natural way to evaluate policies. Based on this observation, we propose a novel policy reuse algorithm that utilizes the critic to choose source policies. 
The proposed algorithm, called Critic-gUided Policy reuse (CUP), avoids training any additional components and achieves efficient transfer. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, thus forming a guidance policy. Then CUP guides learning by regularizing the target policy to imitate the guidance policy. This approach has the following advantages. First, the one-step improvement can be estimated simply by querying the critic, and no additional components are needed to be trained. Secondly, the guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy, which ensures that CUP can reuse the source policies to improve the current target policy. Finally, CUP is conceptually simple and easy to implement, introducing very few hyper-parameters to the underlying algorithm. We evaluate CUP on Meta-World (Yu et al., 2020), a popular reinforcement learning benchmark composed of multiple robot arm manipulation tasks. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms. 2 Preliminaries Reinforcement learning (RL) deals with Markov Decision Processes (MDPs). A MDP can be modelled by a tuple (S,A, r, p, γ), with state space S, action space A, reward function r(s, a), transition function p(s′|s, a), and discount factor γ (Sutton & Barto, 2018). In this study, we focus on MDPs with continuous action spaces. RL’s objective is to find a policy π(a|s) that maximizes the cumulative discounted return R(π) = Eπ [ ∑∞ t=0 γ tr(st, at)]. While CUP is generally applicable to a wide range of actor-critic algorithms, in this work we use SAC (Haarnoja et al., 2018) as the underlying algorithm. The soft Q function and soft V function (Haarnoja et al., 2017) of a policy π are defined as: Qπ(s, a) = r(s, a) + γEs′∼p(·|s,a) [Vπ(s)] (1) Vπ(s) = Ea∼π(·|s) [Qπ(s, a)− α log π(a|s)] , (2) where α > 0 is the entropy weight. SAC’s loss functions are defined as: Lcritic(Qθ) = E(s,a,r,s′)∼D [ Qθ(s, a)− (r + γVθ(s ′)) ]2 Lactor(πϕ) = Es∼D [ Ea∼πϕ(·|s) [α log πϕ(a|s)−Qθ(s, a)] ] Lentropy(α) = Es∼D [ Ea∼πϕ(·|s) [ −α log πϕ(a|s)− αH ]] , (3) where D is the replay buffer, H is a hyper-parameter representing the target entropy, θ and ϕ are network parameters, θ is target network’s parameters, and Vθ(s) = Ea∼π(a|s)[Qθ(s, a) − α log π(a|s)] is the target soft value function. We define the soft expected advantage of action probability distribution πi(·|s) over policy πj at state s as: EAπj (s, πi) = Ea∼πi(·|s) [ Qπj (s, a)− α log πi(a|s)− Vπj (s) ] . (4) EAπj (s, πi) measures the one-step performance improvement brought by following πi instead of πj at state s, and following πj afterwards. The field of policy reuse focuses on solving a target MDP M efficiently by transferring knowledge from a set of source policies {π1, π2, ..., πn}. We denote the target policy learned on M at iteration t as πttar, and its corresponding soft Q function as Qπttar . In this work, we assume that the source policies and the target policy share the same state and action spaces. 3 Critic-Guided Policy Reuse This section presents CUP, an efficient policy reuse algorithm that does not require training any additional components. CUP is built upon actor-critic methods. In each iteration, CUP uses the critic to form a guidance policy from the source policies and the current target policy. Then CUP guides policy search by regularizing the target policy to imitate the guidance policy. 
Section 3.1 presents how to form a guidance policy by aggregating source policies through the critic, and proves that the guidance policy is guaranteed to be a monotonic improvement over the current target policy. We also prove that the target policy is theoretically guaranteed to improve by imitating the guidance policy. Section 3.2 presents the overall framework of CUP. 3.1 Critic-Guided Source Policy Aggregation CUP utilizes action probabilities proposed by source policies to improve the current target policy, and forms a guidance policy. At iteration t of target policy learning, for each state s, the agent has access to a set of candidate action probability distributions proposed by the n source policies and the current target policy: Πst = {π1(·|s), π2(·|s), ..., πn(·|s), πttar(·|s)}. The guidance policy πtg can be formed by combining the action probability distributions that have the largest soft expected advantage over πttar at each state s: πtg(·|s) = argmax π(·|s)∈Πst EAπttar (s, π) = argmax π(·|s)∈Πst Ea∼π(·|s) [ Qπttar (s, a)− α log π(a|s) ] for all s ∈ S. (5) The second equation holds as adding Vπttar (s) to all soft expected advantages does not affect the result of the argmax operator. Eq. 5 implies that at each state, we can choose which source policy to follow simply by querying its expected soft Q value under πttar. Noticing that with function approximation, the exact soft Q value cannot be acquired. The following theorem enables us to form the guidance policy with an approximated soft Q function, and guarantees that the guidance policy is a monotonic improvement over the current target policy. Theorem 1 Let Q̃πttar be an approximation of Qπttar such that |Q̃πttar (s, a)−Qπttar (s, a)| ≤ ϵ for all s ∈ S, a ∈ A. (6) Define π̃tg(·|s) = argmax π(·|s)∈Πst Ea∼π(·|s) [ Q̃πttar (s, a)− α log π(a|s) ] for all s ∈ S. (7) Then, V π̃tg (s) ≥ Vπttar (s)− 2ϵ 1− γ for all s ∈ S. (8) Theorem 1 provides a way to choose source policies using an approximation of the current target policy’s soft Q value. As SAC learns such an approximation, the guidance policy can be formed without training any additional components. The next question is, how to incorporate the guidance policy π̃tg into target policy learning? The following theorem demonstrates that policy improvement can be guaranteed if the target policy is optimized to stay close to the guidance policy. Theorem 2 If DKL ( πt+1tar (·|s)||π̃tg(·|s) ) ≤ δ for all s ∈ S, (9) then Vπt+1tar (s) ≥ Vπttar (s)− √ 2 ln 2δ(R̃max + αHt+1max) (1− γ)2 − 2ϵ+ αH̃max 1− γ for all s ∈ S, (10) where R̃max = max s,a |r(s, a)| is the largest possible absolute value of the reward, Ht+1max = max s H(πt+1tar (·|s)) is the largest entropy of πt+1tar , and H̃max = max s ∣∣H(πttar(·|s))−H(πt+1tar (·|s))∣∣ is the largest possible absolute difference of the policy entropy. According to Theorem 2, the target policy can be improved by minimizing the KL divergence between the target policy and the guidance policy. Thus we can use the KL divergence as an auxiliary loss to guide target policy learning. Proofs of this section are deferred to Appendix B.1 and Appendix B.2. Theorem 1 and Theorem 2 can be extended to common “hard” value functions (deferred to Appendix B.3), so CUP is also applicable to actor-critic algorithms that uses “hard” Bellman updates, such as A3C (Mnih et al., 2016). 3.2 CUP Framework In this subsection we propose the overall framework of CUP. As shown in Fig. 1, at each iteration t, CUP first forms a guidance policy π̃tg according to Eq. 
3.2 CUP Framework

In this subsection we present the overall framework of CUP. As shown in Fig. 1, at each iteration t, CUP first forms a guidance policy π̃_g^t according to Eq. (7), and then provides additional guidance to policy search by regularizing the target policy π_tar^{t+1} to imitate π̃_g^t (Wu et al., 2019; Fujimoto & Gu, 2021). Specifically, CUP minimizes the following loss to optimize π_tar^{t+1}:

$$L_{CUP}(\pi_{tar}^{t+1}) = L_{actor}(\pi_{tar}^{t+1}) + \mathbb{E}_{s\sim\mathcal{D}}\Big[\beta_s\, D_{KL}\big(\pi_{tar}^{t+1}(\cdot|s)\,\|\,\tilde{\pi}_g^t(\cdot|s)\big)\Big], \tag{11}$$

where L_actor is the original actor loss defined in Eq. (3), and β_s > 0 is a hyper-parameter controlling the weight of the regularization. In practice, we find that using a fixed weight for regularization has two problems. First, it is difficult to balance the scale between L_actor and the regularization term, because L_actor grows as the Q value gets larger. Second, a fixed weight cannot reflect the agent's confidence in π̃_g^t. For example, when no source policies have positive soft expected advantages, π̃_g^t = π_tar^t; the agent should then stop imitating π̃_g^t, as π̃_g^t cannot provide any guidance to further improve performance. Noticing that the soft expected advantage serves as a natural confidence measure, we weight the KL divergence with the corresponding soft expected advantage at that state:

$$\beta_s = \beta_1 \min\Big(\widetilde{EA}_{\pi_{tar}^t}(s,\tilde{\pi}_g^t),\; \beta_2\big|\tilde{V}_{\pi_{tar}^t}(s)\big|\Big), \tag{12}$$

where $\widetilde{EA}_{\pi_{tar}^t}(s,\tilde{\pi}_g^t) = \mathbb{E}_{a\sim\tilde{\pi}_g^t(\cdot|s)}\big[\tilde{Q}_{\pi_{tar}^t}(s,a) - \alpha\log\tilde{\pi}_g^t(a|s) - \tilde{V}_{\pi_{tar}^t}(s)\big]$ is the approximated soft expected advantage, β_1, β_2 > 0 are two hyper-parameters, and $\tilde{V}_{\pi_{tar}^t}(s) = \mathbb{E}_{a\sim\pi_{tar}^t(\cdot|s)}\big[\tilde{Q}_{\pi_{tar}^t}(s,a) - \alpha\log\pi_{tar}^t(a|s)\big]$ is the approximated soft value function. This adaptive regularization weight automatically balances the two losses, and ignores the regularization term at states where π̃_g^t can no longer improve over π_tar^t. We further upper-clip the expected advantage with the absolute value of β_2 Ṽ_{π_tar^t} to avoid the agent becoming overly confident about π̃_g^t due to the function approximation error ϵ.

CUP's pseudo-code is presented in Alg. 1. The modifications CUP makes to SAC are marked in red. Additional implementation details are deferred to Appendix D.1.

Algorithm 1 CUP
Require: Source policies {π_1, π_2, ..., π_n}, hyper-parameters λ_θ1, λ_θ2, λ_π, λ_α, τ, H, β_1, β_2
  Initialize replay buffer D
  Initialize actor π_φ, entropy weight α, critics Q_θ1, Q_θ2, target networks Q_θ̄1 ← Q_θ1, Q_θ̄2 ← Q_θ2
  while not done do
    for each environment step do
      a_t ∼ π_φ(·|s_t)
      s_{t+1} ∼ p(·|s_t, a_t)
      D ← D ∪ {s_t, a_t, r(s_t, a_t), s_{t+1}}
    end for
    for each gradient step do
      Sample minibatch b from D
      Query source policies' action probabilities {π_1(·|s), π_2(·|s), ..., π_n(·|s)} for states in b
      Compute expected advantages according to Eq. (4); form π̃_g^t according to Eq. (7)
      θ_i ← θ_i − λ_θi ∇̂_{θ_i} L_critic(Q_θi) for i ∈ {1, 2}
      φ ← φ − λ_π ∇̂_φ L_CUP(π_φ)
      α ← α − λ_α ∇̂_α L_entropy(α)
      θ̄_i ← τ θ_i + (1 − τ) θ̄_i for i ∈ {1, 2}
    end for
  end while
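Putting Eq. (11) and Eq. (12) together, the actor update amounts to the standard SAC actor loss plus an advantage-weighted KL term. The sketch below is one possible rendering under assumed tensor shapes; the zero-clamping and the `detach` on the weight are choices made here for illustration and are not stated in the paper, and the default values of `beta1` and `beta2` are placeholders rather than the paper's settings.

```python
import torch

def cup_actor_loss(sac_actor_loss, kl_to_guidance, expected_advantage, soft_value,
                   beta1=1.0, beta2=1.0):
    """Sketch of Eq. (11)-(12).
    sac_actor_loss     : scalar L_actor from Eq. (3)
    kl_to_guidance     : KL(pi_tar(.|s) || pi_g(.|s)) per state, shape (batch,)
    expected_advantage : estimated EA of the guidance policy per state, shape (batch,)
    soft_value         : estimated soft value of the target policy per state, shape (batch,)"""
    # Eq. (12): confidence-weighted coefficient, upper-clipped by beta2*|V| so the
    # agent does not over-trust the guidance policy under critic approximation error.
    beta_s = beta1 * torch.minimum(expected_advantage, beta2 * soft_value.abs())
    # Extra safeguards for this sketch (not from the paper): never push away from the
    # guidance policy, and treat the weight as a constant during backpropagation.
    beta_s = beta_s.clamp(min=0.0).detach()
    # Eq. (11): SAC actor loss plus the weighted KL regularizer.
    return sac_actor_loss + (beta_s * kl_to_guidance).mean()
```

In an SAC-style training loop, this scalar would simply replace the usual actor loss in the policy update, leaving the critic and entropy-weight updates of Algorithm 1 unchanged.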
4 Experiments

We evaluate on Meta-World (Yu et al., 2020), a popular reinforcement learning benchmark composed of multiple robot manipulation tasks. These tasks are both correlated (performed by the same Sawyer robot arm) and distinct (interacting with different objects and having different reward functions), and serve as a proper evaluation benchmark for policy reuse. The source policies are obtained by training on three representative tasks: Reach, Push, and Pick-Place. We choose several complex tasks as target tasks, including Hammer, Peg-Insert-Side, Push-Wall, Pick-Place-Wall, Push-Back, and Shelf-Place. Among these target tasks, Hammer and Peg-Insert-Side require interacting with objects unseen in the source tasks. In Push-Wall and Pick-Place-Wall, there is a wall between the object and the goal. In Push-Back, the goal distribution is different from that of Push. In Shelf-Place, the robot is required to put a block on a shelf, and the shelf is unseen in the source tasks. Video demonstrations of these tasks are available at https://meta-world.github.io/. Similar to the settings in Yang et al. (2020a), in our experiments the goal position is randomly reset at the start of every episode. Code is available at https://github.com/NagisaZj/CUP.

4.1 Transfer Performance on Meta-World

We compare against several representative baseline algorithms, including HAAR (Li et al., 2019), PTF (Yang et al., 2020b), MULTIPOLAR (Barekatain et al., 2021), and MAMBA (Cheng et al., 2020). Among these algorithms, HAAR and PTF learn hierarchical high-level policies over source policies. MAMBA aggregates source policies' V functions to form a baseline function, and performs policy improvement over the baseline function. MULTIPOLAR learns a weighted sum of source policies' action probabilities, and learns an additional network to predict residuals. We also compare against the original SAC algorithm. All results are averaged over six random seeds.

As shown in Figure 2, CUP is the only algorithm that achieves efficient transfer on all six tasks, significantly outperforming the original SAC algorithm. HAAR has good jump-start performance on Push-Wall and Pick-Place-Wall, but fails to improve further due to the optimization non-stationarity induced by jointly training high-level and low-level policies. MULTIPOLAR achieves comparable performance on Push-Wall and Peg-Insert-Side, because the Push source policy is useful on Push-Wall (implied by HAAR's good jump-start performance), and learning residuals on Peg-Insert-Side is easier (implied by SAC's fast learning). In Pick-Place-Wall, the Pick-Place source policy is useful, but the residual is difficult to learn, so MULTIPOLAR does not work. For the remaining three tasks, the source policies are less useful, and MULTIPOLAR fails on these tasks. PTF fails because its hierarchical policy only gets updated when the agent chooses actions similar to one of the source policies, which is rare when the source and target tasks are distinct. MAMBA fails because accurately estimating all source policies' V functions is sample-inefficient. Algorithm performance evaluated by success rate is deferred to Appendix E.1.

4.2 Analyzing the Guidance Policy

This subsection provides visualizations of CUP's source policy selection. Fig. 3 shows the percentage of each source policy being selected throughout training on Push-Wall. At early stages of training, the source policies are selected more frequently because they have positive expected advantages, which means that they can be used to improve the current target policy. As training proceeds and the target policy becomes better, the source policies are selected less frequently. Among the three source policies, Push is chosen more frequently than the other two, as it is more related to the target task. Figure 4 presents the source policies' expected advantages over an episode at convergence in Pick-Place-Wall. The Push and Reach source policies almost always have negative expected advantages, which implies that these two source policies can hardly improve the current target policy anymore. Meanwhile, the Pick-Place source policy has expected advantages close to zero after 100 environment steps, which implies that the Pick-Place source policy is close to the target policy at these steps.
Analyses on all six tasks, as well as an analysis of HAAR's source policy selection, are deferred to Appendix E.2 and Appendix E.6, respectively.

4.3 Ablation Study

This subsection evaluates CUP's sensitivity to hyper-parameter settings and to the number of source policies. We also evaluate CUP's robustness against random source policies, which do not provide meaningful candidate actions for solving the target tasks.

4.3.1 Hyper-Parameter Sensitivity

For all the experiments in Section 4.1, we use the same set of hyper-parameters, which indicates that CUP is generally applicable to a wide range of tasks without task-specific fine-tuning. CUP introduces only two additional hyper-parameters to the underlying SAC algorithm, and we further test CUP's sensitivity to these additional hyper-parameters. As shown in Fig. 5, CUP is generally robust to the choice of hyper-parameters and achieves stable performance.

4.3.2 Number of Source Policies

We evaluate CUP as well as the baseline algorithms on a larger source policy set. We add three policies to the original source policy set, which solve three simple tasks: Drawer-Close, Push-Wall, and Coffee-Button. This forms a source policy set composed of six policies. As shown in Fig. 6, CUP is still the only algorithm that solves all six target tasks efficiently. MULTIPOLAR suffers a decrease in performance, which indicates that learning the weighted sum of source policies' actions becomes more difficult as the number of source policies grows. The rest of the baseline algorithms perform similarly to their counterparts using three source policies. Fig. 7 provides a more direct comparison of CUP's performance with different numbers of source policies. CUP is able to utilize the additional source policies to further improve its performance, especially on Pick-Place-Wall and Peg-Insert-Side. Further detailed analysis is deferred to Appendix E.3.

4.3.3 Interference of Random Source Policies

To evaluate the efficiency of CUP's critic-guided source policy aggregation, we add random policies to the set of source policies. As shown in Fig. 8(a), adding up to 3 random source policies does not affect CUP's performance. This indicates that CUP can efficiently choose which source policy to follow even when many of the source policies are not meaningful. Adding 4 or 5 random source policies leads to a slight drop in performance. This drop occurs because, as the number of random policies grows, more random actions are sampled, and taking the argmax over these actions' expected advantages is more likely to be affected by errors in value estimation. To further investigate CUP's ability to ignore unsuitable source policies, we design another transfer setting with two additional source policy sets. The first set consists of three random policies that are useless for the target task, and the second set adds the Reach policy to the first set. As demonstrated in Fig. 8(b), when none of the source policies are useful, CUP performs similarly to the original SAC, and its sample efficiency is almost unaffected by the useless source policies. When there exists a useful source policy, CUP can efficiently utilize it to improve performance, even if there are many useless source policies.

5 Related Work
Policy reuse. A series of works on policy reuse utilize source policies for exploration in value-based algorithms (Fernández & Veloso, 2006; Li & Zhang, 2018; Gimelfarb et al., 2021), but they are not applicable to policy gradient methods due to the off-policyness problem (Fujimoto et al., 2019). ACTeach (Kurenkov et al., 2020) mitigates this problem by improving the actor over the behavior policy's value estimate, but still fails in more complex tasks. One branch of methods trains hierarchical high-level policies over source policies. CAPS (Li et al., 2018) guarantees the optimality of the hierarchical policies by adding primitive skills to the low-level policy set, but is inapplicable to MDPs with continuous action spaces. HAAR (Li et al., 2019) fine-tunes low-level policies to ensure optimality, but jointly training high-level and low-level policies induces optimization non-stationarity (Pateria et al., 2021). PTF (Yang et al., 2020b) trains a hierarchical policy, which is imitated by the target policy. However, the hierarchical policy only gets updated when the target policy chooses actions similar to one of the source policies, so PTF fails in complex tasks with large action spaces. Another branch of works aggregates source policies via their Q functions or V functions on the target task. Barreto et al. (2017) and Barreto et al. (2018) focus on the setting where source and target tasks share the same dynamics, and aggregate source policies by choosing the policy with the largest Q value at each state. They use successor features to mitigate the heavy computation cost of estimating Q functions for all source policies. MAMBA (Cheng et al., 2020) forms a baseline function by aggregating source policies' V functions, and guides policy search by improving the policy over the baseline function. Finally, MULTIPOLAR (Barekatain et al., 2021) learns a weighted sum over source policies' actions, and learns an auxiliary network to predict residuals around the aggregated actions. MULTIPOLAR is computationally expensive, as it requires querying all the source policies at every sampling step. Our proposed method, CUP, focuses on learning continuous-action MDPs with actor-critic methods. CUP is both computationally and sample efficient, as it does not require training any additional components.

Policy regularization. Adding regularization to policy optimization is a common approach to inject prior knowledge into policy learning. Distral (Teh et al., 2017) achieves inter-task transfer by imitating an average policy distilled from the policies of related tasks. In offline RL, policy regularization is a common technique to keep the policy close to the behavior policy used to collect the dataset (Wu et al., 2019; Nair et al., 2020; Fujimoto & Gu, 2021). CUP uses policy regularization as a means to provide additional guidance to policy search through the guidance policy.

6 Conclusion

In this study, we address the problem of reusing source policies without training any additional components. By utilizing the critic as a natural evaluation of source policies, we propose CUP, an efficient policy reuse algorithm that requires no extra trained components. CUP is conceptually simple, easy to implement, and has theoretical guarantees. Empirical results demonstrate that CUP achieves efficient transfer on a wide range of tasks.
A limitation left for future work is that CUP assumes all source policies and the target policy share the same state and action spaces, which restricts its application to more general scenarios. One possible direction is to take inspiration from previous works that map the state and action spaces of one MDP to those of another MDP with a similar high-level structure (Wan et al., 2020; Zhang et al., 2020; Heng et al., 2022; van der Pol et al., 2020a,b). Another interesting direction is to incorporate CUP into the continual learning setting (Rolnick et al., 2019; Khetarpal et al., 2020), in which an agent gradually enriches its source policy set in an online manner.

Acknowledgements

This work is supported in part by the Science and Technology Innovation 2030 – "New Generation Artificial Intelligence" Major Project (No. 2018AAA0100904), the National Natural Science Foundation of China (62176135), and the China Academy of Launch Vehicle Technology (CALT2022-18).
1. What is the main contribution of the paper regarding policy reuse?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its theoretical guarantees and experimental results?
3. Do you have any questions or concerns about the paper's assumptions and limitations, especially regarding its applicability to real-world scenarios?
4. How does the reviewer assess the clarity and adequacy of the paper's writing and experimental analysis?
5. Are there any suggestions for future improvements or directions for extending the CUP algorithm to more general scenarios?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

This paper proposes a novel policy reuse algorithm, Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy and forms a guidance policy. The target policy can then be regularized to imitate the guidance policy to perform an efficient policy search.

Strengths And Weaknesses

Strengths: This paper not only proves that the guidance policy is guaranteed to be a monotonic improvement over the current target policy, but also proves that the target policy is theoretically guaranteed to improve by imitating the guidance policy. The experimental part of the paper is adequate: in addition to transfer performance, it includes an analysis of the guidance policy, sensitivity to hyper-parameter settings and to the number of source policies, and the interference of random source policies. The writing in this part is also very clear.

Weaknesses: Some descriptions are not clear and should be clarified. In the experimental part, I have questions about the analysis of some results. For example, I do not see a noticeable improvement in the comparison of CUP's performance with different numbers of source policies, so it cannot be concluded that CUP is able to utilize the additional source policies to further improve its performance. The conclusion merely repeats what was said in the introduction and lacks a discussion of CUP's limitations. For example, CUP relies on a very strong assumption that the source policies and the target policy share the same state and action spaces, which is not common in the real world. This strong assumption limits the extension of CUP to more general scenarios [1-3]. The authors should add some reflections on directions for future improvement.

[1] Mutual Information Based Knowledge Transfer Under State-Action Dimension Mismatch. UAI 2020.
[2] Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency. ICLR 2021.
[3] Cross-domain Adaptive Transfer Reinforcement Learning Based on State-Action Correspondence. UAI 2022.

Questions

In the experimental part, I do not see a noticeable improvement in the comparison of CUP's performance with different numbers of source policies, so it cannot be concluded that CUP is able to utilize the additional source policies to further improve its performance. From how CUP works, it should not be affected by random policies, because it chooses the largest soft expected advantage at each state s. As the number of random policies grows, the performance should not be affected even though the computational complexity will increase. So why can CUP only adapt to 3 random policies?

Limitations

The authors should discuss the limitations of CUP. For example, CUP relies on a very strong assumption that the source policies and the target policy share the same state and action spaces, which is not common in the real world. For instance, how can CUP be applied to transfer settings across different robots, e.g., robots of different types or with different numbers of joints? This strong assumption limits the extension of CUP to more general scenarios.
NIPS
1. What is the focus and contribution of the paper regarding transfer learning in reinforcement learning?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in terms of its novelty and practical performance?
3. Do you have any concerns or questions regarding the experimental results and their implications?
4. How does the reviewer assess the broader limitations and potential avenues for future improvement in the field?
5. Are there any specific aspects of the paper that the reviewer finds surprising or unclear, such as the usage of guide policies or the performance on certain tasks?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

CUP is an algorithm for re-using previously learned policies to guide the training of a new policy on a different-but-related task. CUP does this by selecting a single guide policy from among the library of pretrained policies at each step, which the student policy is then trained to imitate via a KL-divergence regularization term.

Strengths And Weaknesses

Overall, I liked this paper. The algorithm is novel, clearly presented, and conceptually simple (a plus). The experiments provide reasonable evidence that CUP improves performance compared to both baseline and alternative teacher-student transfer algorithms. I do have a few concerns, though I don't think any of this invalidates the results presented:

- If I understand correctly, taking an argmax among policies using a partially-trained value function seems prone to bias/error magnification. Given relatively poor Q estimates of equal magnitude for each policy's sampled actions (as can happen early in training), the guide policy selected will tend to be the one that samples actions that Q is most (unrealistically) optimistic about. Further, the estimated-advantage KL term weighting makes these updates larger. I think the authors appreciate this, hence their value-function upper bound on the weighting term and not using the KL term for the first 0.5M environment steps (as per A.4), and the result is an algorithm that works in practice (as shown by the experiments). That said, it does make me wonder how well performance will hold up as the difference between source and target tasks increases (where many actions sampled by source policies will be bad for any given state). The random-source-policy ablation sort-of tests this, but still assumes a subset of source policies are relatively high-performing. Basically, can CUP be used to gain a training benefit from weak teacher policies?
- Relatedly, the bound in Theorem 2 depends on the difference between source and target policies (as well as reward magnitude), and could be a very large bound given adversarial values. I'm willing to accept that this isn't an issue in practice (at least for Metaworld), but I'm curious to see how those factors impact empirical performance.
- Connected to the above two points, while it may be something of a stereotype for reviewers to ask for more experiments, additional experiments on other Metaworld tasks (either the full suite of 50 or select "hard" tasks that are less similar to the source tasks) would improve the paper in a worthwhile way. Ideally I'd like some qualitative evidence for how different aspects of source versus target tasks affect performance for CUP. In the current results it looks like CUP improves over SAC less on Hammer and Peg-Insert-Side, the two "more novel" tasks, but without more tasks or deeper analysis it's hard to say anything conclusive.

Questions

The conclusion section of the paper is pretty limited. I appreciate that space is limited, but some discussion of broader limitations and possible avenues for future improvement would be nice if space can be found.

In Figure 3, I'm surprised how little CUP seems to use any guide policy throughout training. I'm not sure offhand how to tap into it, but this seems like it might be a sign of leaving performance on the table? The least-trained target policy is only getting updated with the KL term about half the time on a task where the Push guide policy should be highly informative.
Related to that, I wonder what the percentages would be when training on one of the source tasks. For example, would the Push policy get used more when training on the Push task? This could provide a useful indicator of whether there is more to be gained from the source policies.

Limitations

I included discussion and suggestions regarding limitations in the previous sections, and while I'd like to see more "hard" test cases, the existing experiments do provide some idea of the limitations of CUP. The potential for negative social impact from this work is limited but is addressed in the appendix.
NIPS
Title CUP: Critic-Guided Policy Reuse Abstract The ability to reuse previous policies is an important aspect of human intelligence. To achieve efficient policy reuse, a Deep Reinforcement Learning (DRL) agent needs to decide when to reuse and which source policies to reuse. Previous methods solve this problem by introducing extra components to the underlying algorithm, such as hierarchical high-level policies over source policies, or estimations of source policies’ value functions on the target task. However, training these components induces either optimization non-stationarity or heavy sampling cost, significantly impairing the effectiveness of transfer. To tackle this problem, we propose a novel policy reuse algorithm called Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. CUP utilizes the critic, a common component in actor-critic methods, to evaluate and choose source policies. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, and forms a guidance policy. The guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy. Then the target policy is regularized to imitate the guidance policy to perform efficient policy search. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms. 1 Introduction Human intelligence can solve new tasks quickly by reusing previous policies (Guberman & Greenfield, 1991). Despite remarkable success, current Deep Reinforcement Learning (DRL) agents lack this knowledge transfer ability (Silver et al., 2017; Vinyals et al., 2019; Ceron & Castro, 2021), leading to enormous computation and sampling cost. As a consequence, a large number of works have been studying the problem of policy reuse in DRL, i.e., how to efficiently reuse source policies to speed up target policy learning (Fernández & Veloso, 2006; Barreto et al., 2018; Li et al., 2019; Yang et al., 2020b). A fundamental challenge towards policy reuse is: how does an agent with access to multiple source policies decide when and where to use them (Fernández & Veloso, 2006; Kurenkov et al., 2020; Cheng et al., 2020)? Previous methods solve this problem by introducing additional components to the underlying DRL algorithm, such as hierarchical high-level policies over source policies (Li et al., 2018, 2019; Yang et al., 2020b), or estimations of source policies’ value functions on the target task (Barreto et al., 2017, 2018; Cheng et al., 2020). However, training these components significantly impairs the effectiveness of transfer, as hierarchical structures induce optimization non-stationarity (Pateria et al., 2021), and estimating the value functions for every source policy is computationally expensive and with high sampling cost. Thus, the objective of this study is to address the question: 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Can we achieve efficient transfer without training additional components? Notice that actor-critic methods (Lillicrap et al., 2016; Fujimoto et al., 2018; Haarnoja et al., 2018) learn a critic that approximates the actor’s Q function and serves as a natural way to evaluate policies. Based on this observation, we propose a novel policy reuse algorithm that utilizes the critic to choose source policies. 
The proposed algorithm, called Critic-gUided Policy reuse (CUP), avoids training any additional components and achieves efficient transfer. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, thus forming a guidance policy. Then CUP guides learning by regularizing the target policy to imitate the guidance policy. This approach has the following advantages. First, the one-step improvement can be estimated simply by querying the critic, and no additional components are needed to be trained. Secondly, the guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy, which ensures that CUP can reuse the source policies to improve the current target policy. Finally, CUP is conceptually simple and easy to implement, introducing very few hyper-parameters to the underlying algorithm. We evaluate CUP on Meta-World (Yu et al., 2020), a popular reinforcement learning benchmark composed of multiple robot arm manipulation tasks. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms. 2 Preliminaries Reinforcement learning (RL) deals with Markov Decision Processes (MDPs). A MDP can be modelled by a tuple (S,A, r, p, γ), with state space S, action space A, reward function r(s, a), transition function p(s′|s, a), and discount factor γ (Sutton & Barto, 2018). In this study, we focus on MDPs with continuous action spaces. RL’s objective is to find a policy π(a|s) that maximizes the cumulative discounted return R(π) = Eπ [ ∑∞ t=0 γ tr(st, at)]. While CUP is generally applicable to a wide range of actor-critic algorithms, in this work we use SAC (Haarnoja et al., 2018) as the underlying algorithm. The soft Q function and soft V function (Haarnoja et al., 2017) of a policy π are defined as: Qπ(s, a) = r(s, a) + γEs′∼p(·|s,a) [Vπ(s)] (1) Vπ(s) = Ea∼π(·|s) [Qπ(s, a)− α log π(a|s)] , (2) where α > 0 is the entropy weight. SAC’s loss functions are defined as: Lcritic(Qθ) = E(s,a,r,s′)∼D [ Qθ(s, a)− (r + γVθ(s ′)) ]2 Lactor(πϕ) = Es∼D [ Ea∼πϕ(·|s) [α log πϕ(a|s)−Qθ(s, a)] ] Lentropy(α) = Es∼D [ Ea∼πϕ(·|s) [ −α log πϕ(a|s)− αH ]] , (3) where D is the replay buffer, H is a hyper-parameter representing the target entropy, θ and ϕ are network parameters, θ is target network’s parameters, and Vθ(s) = Ea∼π(a|s)[Qθ(s, a) − α log π(a|s)] is the target soft value function. We define the soft expected advantage of action probability distribution πi(·|s) over policy πj at state s as: EAπj (s, πi) = Ea∼πi(·|s) [ Qπj (s, a)− α log πi(a|s)− Vπj (s) ] . (4) EAπj (s, πi) measures the one-step performance improvement brought by following πi instead of πj at state s, and following πj afterwards. The field of policy reuse focuses on solving a target MDP M efficiently by transferring knowledge from a set of source policies {π1, π2, ..., πn}. We denote the target policy learned on M at iteration t as πttar, and its corresponding soft Q function as Qπttar . In this work, we assume that the source policies and the target policy share the same state and action spaces. 3 Critic-Guided Policy Reuse This section presents CUP, an efficient policy reuse algorithm that does not require training any additional components. CUP is built upon actor-critic methods. In each iteration, CUP uses the critic to form a guidance policy from the source policies and the current target policy. Then CUP guides policy search by regularizing the target policy to imitate the guidance policy. 
Section 3.1 presents how to form a guidance policy by aggregating source policies through the critic, and proves that the guidance policy is guaranteed to be a monotonic improvement over the current target policy. We also prove that the target policy is theoretically guaranteed to improve by imitating the guidance policy. Section 3.2 presents the overall framework of CUP. 3.1 Critic-Guided Source Policy Aggregation CUP utilizes action probabilities proposed by source policies to improve the current target policy, and forms a guidance policy. At iteration t of target policy learning, for each state s, the agent has access to a set of candidate action probability distributions proposed by the n source policies and the current target policy: Πst = {π1(·|s), π2(·|s), ..., πn(·|s), πttar(·|s)}. The guidance policy πtg can be formed by combining the action probability distributions that have the largest soft expected advantage over πttar at each state s: πtg(·|s) = argmax π(·|s)∈Πst EAπttar (s, π) = argmax π(·|s)∈Πst Ea∼π(·|s) [ Qπttar (s, a)− α log π(a|s) ] for all s ∈ S. (5) The second equation holds as adding Vπttar (s) to all soft expected advantages does not affect the result of the argmax operator. Eq. 5 implies that at each state, we can choose which source policy to follow simply by querying its expected soft Q value under πttar. Noticing that with function approximation, the exact soft Q value cannot be acquired. The following theorem enables us to form the guidance policy with an approximated soft Q function, and guarantees that the guidance policy is a monotonic improvement over the current target policy. Theorem 1 Let Q̃πttar be an approximation of Qπttar such that |Q̃πttar (s, a)−Qπttar (s, a)| ≤ ϵ for all s ∈ S, a ∈ A. (6) Define π̃tg(·|s) = argmax π(·|s)∈Πst Ea∼π(·|s) [ Q̃πttar (s, a)− α log π(a|s) ] for all s ∈ S. (7) Then, V π̃tg (s) ≥ Vπttar (s)− 2ϵ 1− γ for all s ∈ S. (8) Theorem 1 provides a way to choose source policies using an approximation of the current target policy’s soft Q value. As SAC learns such an approximation, the guidance policy can be formed without training any additional components. The next question is, how to incorporate the guidance policy π̃tg into target policy learning? The following theorem demonstrates that policy improvement can be guaranteed if the target policy is optimized to stay close to the guidance policy. Theorem 2 If DKL ( πt+1tar (·|s)||π̃tg(·|s) ) ≤ δ for all s ∈ S, (9) then Vπt+1tar (s) ≥ Vπttar (s)− √ 2 ln 2δ(R̃max + αHt+1max) (1− γ)2 − 2ϵ+ αH̃max 1− γ for all s ∈ S, (10) where R̃max = max s,a |r(s, a)| is the largest possible absolute value of the reward, Ht+1max = max s H(πt+1tar (·|s)) is the largest entropy of πt+1tar , and H̃max = max s ∣∣H(πttar(·|s))−H(πt+1tar (·|s))∣∣ is the largest possible absolute difference of the policy entropy. According to Theorem 2, the target policy can be improved by minimizing the KL divergence between the target policy and the guidance policy. Thus we can use the KL divergence as an auxiliary loss to guide target policy learning. Proofs of this section are deferred to Appendix B.1 and Appendix B.2. Theorem 1 and Theorem 2 can be extended to common “hard” value functions (deferred to Appendix B.3), so CUP is also applicable to actor-critic algorithms that uses “hard” Bellman updates, such as A3C (Mnih et al., 2016). 3.2 CUP Framework In this subsection we propose the overall framework of CUP. As shown in Fig. 1, at each iteration t, CUP first forms a guidance policy π̃tg according to Eq. 
then provides additional guidance to policy search by regularizing the target policy π^{t+1}_{tar} to imitate π̃^t_g (Wu et al., 2019; Fujimoto & Gu, 2021). Specifically, CUP minimizes the following loss to optimize π^{t+1}_{tar}:

L_CUP(π^{t+1}_{tar}) = L_actor(π^{t+1}_{tar}) + E_{s∼D}[β_s D_KL(π^{t+1}_{tar}(·|s) || π̃^t_g(·|s))],   (11)

where L_actor is the original actor loss defined in Eq. (3), and β_s > 0 is a hyper-parameter controlling the weight of regularization. In practice, we find that using a fixed weight for regularization has two problems. First, it is difficult to balance the scale between L_actor and the regularization term, because L_actor grows as the Q value gets larger. Second, a fixed weight cannot reflect the agent's confidence in π̃^t_g. For example, when no source policies have positive soft expected advantages, π̃^t_g = π^t_{tar}. Then the agent should not imitate π̃^t_g anymore, as π̃^t_g cannot provide any guidance to further improve performance. Since the soft expected advantage serves as a natural confidence measure, we weight the KL divergence with the corresponding soft expected advantage at that state:

β_s = β_1 min(ẼA_{π^t_{tar}}(s, π̃^t_g), β_2 |Ṽ_{π^t_{tar}}(s)|),   (12)

where ẼA_{π^t_{tar}}(s, π̃^t_g) = E_{a∼π̃^t_g(·|s)}[Q̃_{π^t_{tar}}(s, a) − α log π̃^t_g(a|s) − Ṽ_{π^t_{tar}}(s)] is the approximated soft expected advantage, β_1, β_2 > 0 are two hyper-parameters, and Ṽ_{π^t_{tar}}(s) = E_{a∼π^t_{tar}(·|s)}[Q̃_{π^t_{tar}}(s, a) − α log π^t_{tar}(a|s)] is the approximated soft value function. This adaptive regularization weight automatically balances the two losses, and ignores the regularization term at states where π̃^t_g cannot improve over π^t_{tar} anymore. We further upper-clip the expected advantage with the absolute value of β_2 Ṽ_{π^t_{tar}}(s) to avoid the agent being overly confident about π̃^t_g due to the function approximation error ϵ.

CUP's pseudo-code is presented in Alg. 1. The modifications CUP makes to SAC are marked with "(CUP)". Additional implementation details are deferred to Appendix D.1.

Algorithm 1 CUP
Require: source policies {π_1, π_2, ..., π_n}, hyper-parameters λ_{θ_1}, λ_{θ_2}, λ_π, λ_α, τ, H, β_1, β_2
Initialize replay buffer D
Initialize actor π_ϕ, entropy weight α, critics Q_{θ_1}, Q_{θ_2}, target networks Q_{θ̄_1} ← Q_{θ_1}, Q_{θ̄_2} ← Q_{θ_2}
while not done do
    for each environment step do
        a_t ∼ π_ϕ(·|s_t)
        s_{t+1} ∼ p(·|s_t, a_t)
        D ← D ∪ {(s_t, a_t, r(s_t, a_t), s_{t+1})}
    end for
    for each gradient step do
        Sample minibatch b from D
        Query source policies' action probabilities {π_1(·|s), π_2(·|s), ..., π_n(·|s)} for states in b   (CUP)
        Compute expected advantages according to Eq. (4); form π̃^t_g according to Eq. (7)   (CUP)
        θ_i ← θ_i − λ_{θ_i} ∇̂_{θ_i} L_critic(Q_{θ_i}) for i ∈ {1, 2}
        ϕ ← ϕ − λ_π ∇̂_ϕ L_CUP(π_ϕ)   (CUP)
        α ← α − λ_α ∇̂_α L_entropy(α)
        θ̄_i ← τ θ_i + (1 − τ) θ̄_i for i ∈ {1, 2}
    end for
end while

4 Experiments

We evaluate CUP on Meta-World (Yu et al., 2020), a popular reinforcement learning benchmark composed of multiple robot manipulation tasks. These tasks are both correlated (performed by the same Sawyer robot arm) and distinct (interacting with different objects and having different reward functions), and serve as a proper evaluation benchmark for policy reuse. The source policies are obtained by training on three representative tasks: Reach, Push, and Pick-Place. We choose several complex tasks as target tasks, including Hammer, Peg-Insert-Side, Push-Wall, Pick-Place-Wall, Push-Back, and Shelf-Place. Among these target tasks, Hammer and Peg-Insert-Side require interacting with objects unseen in the source tasks. In Push-Wall and Pick-Place-Wall, there is a wall between the object and the goal.
In Push-Back, the goal distribution is different from Push. In Shelf-Place, the robot is required to put a block on a shelf, and the shelf is unseen in the source tasks. Video demonstrations of these tasks are available at https://meta-world.github.io/. Similar to the settings in Yang et al. (2020a), in our experiments the goal position is randomly reset at the start of every episode. Code is available at https://github.com/NagisaZj/CUP.

4.1 Transfer Performance on Meta-World

We compare against several representative baseline algorithms, including HAAR (Li et al., 2019), PTF (Yang et al., 2020b), MULTIPOLAR (Barekatain et al., 2021), and MAMBA (Cheng et al., 2020). Among these algorithms, HAAR and PTF learn hierarchical high-level policies over source policies. MAMBA aggregates source policies' V functions to form a baseline function, and performs policy improvement over the baseline function. MULTIPOLAR learns a weighted sum of source policies' action probabilities, and learns an additional network to predict residuals. We also compare against the original SAC algorithm. All the results are averaged over six random seeds. As shown in Figure 2, CUP is the only algorithm that achieves efficient transfer on all six tasks, significantly outperforming the original SAC algorithm. HAAR has a jump-start performance on Push-Wall and Pick-Place-Wall, but fails to further improve due to optimization non-stationarity induced by jointly training high-level and low-level policies. MULTIPOLAR achieves comparable performance on Push-Wall and Peg-Insert-Side, because the Push source policy is useful on Push-Wall (implied by HAAR's good jump-start performance), and learning residuals on Peg-Insert-Side is easier (implied by SAC's fast learning). In Pick-Place-Wall, the Pick-Place source policy is useful, but the residual is difficult to learn, so MULTIPOLAR does not work. For the remaining three tasks, the source policies are less useful, and MULTIPOLAR fails on these tasks. PTF fails as its hierarchical policy only gets updated when the agent chooses similar actions to one of the source policies, which is quite rare when the source and target tasks are distinct. MAMBA fails as estimating all source policies' V functions accurately is sample-inefficient. Algorithm performance evaluated by success rate is deferred to Appendix E.1.

4.2 Analyzing the Guidance Policy

This subsection provides visualizations of CUP's source policy selection. Fig. 3 shows the percentages of each source policy being selected throughout training on Push-Wall. At early stages of training, the source policies are selected more frequently as they have positive expected advantages, which means that they can be used to improve the current target policy. As training proceeds and the target policy becomes better, the source policies are selected less frequently. Among these three source policies, Push is chosen more frequently than the other two source policies, as it is more related to the target task. Figure 4 presents the source policies' expected advantages over an episode at convergence in Pick-Place-Wall. The Push source policy and Reach source policy almost always have negative expected advantages, which implies that these two source policies can hardly improve the current target policy anymore. Meanwhile, the Pick-Place source policy has expected advantages close to zero after 100 environment steps, which implies that the Pick-Place source policy is close to the target policy at these steps.
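The selection behaviour visualized above reduces to an argmax over critic-based scores. Below is a minimal sketch, reusing the soft_expected_advantage helper from the earlier snippet and the same hypothetical interfaces, of how the guidance policy (Eq. (7)) and the adaptive weight β_s (Eq. (12)) could be computed at a single state; it is illustrative only, not the authors' exact implementation.

import numpy as np  # soft_expected_advantage is assumed from the earlier sketch

def select_guidance_and_weight(critic_q, source_policies, target_policy, state,
                               alpha, beta1, beta2, n_samples=10):
    """Pick the guidance policy (Eq. (7)) and its adaptive KL weight (Eq. (12))."""
    candidates = list(source_policies) + [target_policy]
    # Soft expected advantage of each candidate over the current target policy.
    advantages = [soft_expected_advantage(critic_q, pi, target_policy, state,
                                          alpha, n_samples) for pi in candidates]
    best = int(np.argmax(advantages))
    guidance = candidates[best]

    # Approximate soft value of the target policy, used for upper clipping.
    v_target = 0.0
    for _ in range(n_samples):
        a = target_policy.sample(state)
        v_target += critic_q(state, a) - alpha * target_policy.log_prob(state, a)
    v_target /= n_samples

    # beta_s = beta1 * min(EA(s, guidance), beta2 * |V(s)|), as in Eq. (12).
    beta_s = beta1 * min(advantages[best], beta2 * abs(v_target))
    return guidance, beta_s

When no candidate improves on the target policy, the target policy itself is selected with an advantage of (approximately) zero, so β_s vanishes and the KL term in Eq. (11) is effectively switched off, matching the behaviour described in Section 3.2.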
Analyses on all six tasks as well as analyses on HAAR's source policy selection are deferred to Appendix E.2 and Appendix E.6, respectively.

4.3 Ablation Study

This subsection evaluates CUP's sensitivity to hyper-parameter settings and the number of source policies. We also evaluate CUP's robustness against random source policies, which do not provide meaningful candidate actions for solving target tasks.

4.3.1 Hyper-Parameter Sensitivity

For all the experiments in Section 4.1, we use the same set of hyper-parameters, which indicates that CUP is generally applicable to a wide range of tasks without task-specific tuning. CUP introduces only two additional hyper-parameters to the underlying SAC algorithm, and we further test CUP's sensitivity to these additional hyper-parameters. As shown in Fig. 5, CUP is generally robust to the choice of hyper-parameters and achieves stable performance.

4.3.2 Number of Source Policies

We evaluate CUP as well as baseline algorithms on a larger source policy set. We add three policies to the original source policy set, which solve three simple tasks including Drawer-Close, Push-Wall, and Coffee-Button. This forms a source policy set composed of six policies. As shown in Fig. 6, CUP is still the only algorithm that solves all the six target tasks efficiently. MULTIPOLAR suffers from a decrease in performance, which indicates that learning the weighted sum of source policies' actions becomes more difficult as the number of source policies grows. The rest of the baseline algorithms have similar performance to those using three source policies. Fig. 7 provides a more direct comparison of CUP's performance with different numbers of source policies. CUP is able to utilize the additional source policies to further improve its performance, especially on Pick-Place-Wall and Peg-Insert-Side. Further detailed analysis is deferred to Appendix E.3.

4.3.3 Interference of Random Source Policies

In order to evaluate the efficiency of CUP's critic-guided source policy aggregation, we add random policies to the set of source policies. As shown in Fig. 8(a), adding up to 3 random source policies does not affect CUP's performance. This indicates that CUP can efficiently choose which source policy to follow even if there exist many source policies that are not meaningful. Adding 4 and 5 random source policies leads to a slight drop in performance. This drop occurs because, as the number of random policies grows, more random actions are sampled, and taking the argmax over these actions' expected advantages is more likely to be affected by errors in value estimation. To further investigate CUP's ability to ignore unsuitable source policies, we design an additional transfer setting with two further source policy sets. The first set consists of three random policies that are useless for the target task, and the second set adds the Reach policy to the first set. As demonstrated in Fig. 8(b), when none of the source policies are useful, CUP performs similarly to the original SAC, and its sample efficiency is almost unaffected by the useless source policies. When there exists a useful source policy, CUP can efficiently utilize it to improve performance, even if there are many useless source policies.

5 Related Work

Policy reuse.
A series of works on policy reuse utilize source policies for exploration in value-based algorithms (Fernández & Veloso, 2006; Li & Zhang, 2018; Gimelfarb et al., 2021), but they are not applicable to policy gradient methods due to the off-policyness problem (Fujimoto et al., 2019). ACTeach (Kurenkov et al., 2020) mitigates this problem by improving the actor over the behavior policy's value estimation, but still fails in more complex tasks. One branch of methods trains hierarchical high-level policies over source policies. CAPS (Li et al., 2018) guarantees the optimality of the hierarchical policies by adding primitive skills to the low-level policy set, but is inapplicable to MDPs with continuous action spaces. HAAR (Li et al., 2019) fine-tunes low-level policies to ensure optimality, but joint training of high-level and low-level policies induces optimization non-stationarity (Pateria et al., 2021). PTF (Yang et al., 2020b) trains a hierarchical policy, which is imitated by the target policy. However, the hierarchical policy only gets updated when the target policy chooses similar actions to one of the source policies, so PTF fails in complex tasks with large action spaces. Another branch of works aggregates source policies via their Q functions or V functions on the target task. Barreto et al. (2017) and Barreto et al. (2018) focus on the situation where source tasks and target tasks share the same dynamics, and aggregate source policies by choosing the policy that has the largest Q at each state. They use successor features to mitigate the heavy computation cost brought by estimating Q functions for all source policies. MAMBA (Cheng et al., 2020) forms a baseline function by aggregating source policies' V functions, and guides policy search by improving the policy over the baseline function. Finally, MULTIPOLAR (Barekatain et al., 2021) learns a weighted sum over source policies' actions, and learns an auxiliary network to predict residuals around the aggregated actions. MULTIPOLAR is computationally expensive, as it requires querying all the source policies at every sampling step. Our proposed method, CUP, focuses on the setting of learning continuous-action MDPs with actor-critic methods. CUP is both computationally and sample efficient, as it does not require training any additional components.

Policy regularization. Adding regularization to policy optimization is a common approach to induce prior knowledge into policy learning. Distral (Teh et al., 2017) achieves inter-task transfer by imitating an average policy distilled from policies of related tasks. In offline RL, policy regularization serves as a common technique to keep the policy close to the behavior policy used to collect the dataset (Wu et al., 2019; Nair et al., 2020; Fujimoto & Gu, 2021). CUP uses policy regularization as a means to provide additional guidance to policy search with the guidance policy.

6 Conclusion

In this study, we address the problem of reusing source policies without training any additional components. By utilizing the critic as a natural evaluation of source policies, we propose CUP, an efficient policy reuse algorithm that requires no additional trained components. CUP is conceptually simple, easy to implement, and has theoretical guarantees. Empirical results demonstrate that CUP achieves efficient transfer on a wide range of tasks.
As for future work, CUP assumes that all source policies and the target policy share the same state and action spaces, which limits CUP’s application to more general scenarios. One possible future direction is to take inspiration from previous works that map the state and action spaces of an MDP to another MDP with similar high-level structure (Wan et al., 2020; Zhang et al., 2020; Heng et al., 2022; van der Pol et al., 2020b,a). Another interesting direction is to incorporate CUP into the continual learning setting (Rolnick et al., 2019; Khetarpal et al., 2020), in which an agent gradually enriches its source policy set in an online manner. Acknowledgements This work is supported in part by Science and Technology Innovation 2030 – “New Generation Artificial Intelligence” Major Project (No. 2018AAA0100904), National Natural Science Foundation of China (62176135), and China Academy of Launch Vehicle Technology (CALT2022-18).
1. What is the focus and contribution of the paper on efficient policy reuse? 2. What are the strengths of the proposed approach, particularly in its simplicity and straightforwardness? 3. What are the weaknesses of the paper regarding the choice of hyperparameters and their impact on the method's performance? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the comparison with other works and the advantage of distilling source policies into a single policy?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper is meant to achieve efficient policy reuse for resolving complex tasks. Specifically, they introduce Critic-gUided Policy reuse (CUP), evaluating and choosing appropriate source policies to regularize the training on the target tasks. Experiments shows convincing improvements over kinds of baselines Strengths And Weaknesses Strength: The topic of reusing simple source policies for resolving complex tasks is important. The paper is clearly stated, well written, and easy to follow. The proposed method is intuitive, simple and straightforward. The experiment comparison is strong and convincing. Weakness: It seems that the proposed method tends to be affected by the chosen of hyper-parameters. Although the authors show in Figure 5 that “CUP performs well on a wide range of hyper-parameters”, it is not quite a large range. From the formulation, I find it will be a little tricky to tune the two β s to reach a balanced imitation. List all hyper-parameters used in your experiments and provide both default and more guidance to the hyper-parameter settings will help relieve this concern. I am willing to vote for an accept to this paper, but I would like to do so after the authors can relieve my concern about the hyper parameters and completeness (also see Questions below). === After the first round of rebuttal, the author addressed most of my concerns, and I am increasing my score to 5. === After the second round of rebuttal, the author further addressed my concerns, and I am increasing my score to 6. Questions You mentioned that “MULTIPOLAR fails in more complex tasks” but the algorithm works in the last two figures. So what is the order of difficulty of these six tasks? Why not provide the percentages and expected advantages figure for all tasks (in the Appendix)? Better include all figures in the Appendix for completeness. It seems the proposed method reuses the source policies by distilling them into a single one. Can the author provide more discussion about the advantage (e.g., stability, intuition, efficiency) to do so compared to those HRL works who learned to choose different source policies? Why distill all knowledge into a single policy a better idea? Limitations The authors have addressed their limitations.
NIPS
Title Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning

Abstract Model-based reinforcement learning (RL) has shown great potential in various control tasks in terms of both sample-efficiency and final performance. However, learning a generalizable dynamics model robust to changes in dynamics remains a challenge since the target transition dynamics follow a multi-modal distribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning, that learns a multi-headed dynamics model for dynamics generalization. The main idea is updating the most accurate prediction head to specialize each head in certain environments with similar dynamics, i.e., clustering environments. Moreover, we incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector, enabling the model to perform online adaptation to unseen environments. Finally, to utilize the specialized prediction heads more effectively, we propose an adaptive planning method, which selects the most accurate prediction head over a recent experience. Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods. Source code and videos are available at https://sites.google.com/view/trajectory-mcl.

1 Introduction

Deep reinforcement learning (RL) has exhibited wide success in solving sequential decision-making problems [23, 39, 45]. Early successful deep RL approaches had been mostly model-free, which do not require an explicit model of the environment, but instead directly learn a policy [25, 28, 38]. However, despite the strong asymptotic performance, the applications of model-free RL have largely been limited to simulated domains due to its high sample complexity. For this reason, model-based RL has been gaining considerable attention as a sample-efficient alternative, with an eye towards robotics and other physics domains. The increased sample-efficiency of model-based RL algorithms is obtained by exploiting the structure of the problem: first the agent learns a predictive model of the environment, and then plans ahead with the learned model [1, 37, 42]. Recently, substantial progress has been made on the sample-efficiency of model-based RL algorithms [5, 7, 22, 23, 24]. However, it has been evidenced that model-based RL algorithms are not robust to changes in the dynamics [20, 30], i.e., dynamics models fail to provide accurate predictions as the transition dynamics of environments change. This makes model-based RL algorithms unreliable to be deployed into real-world environments where partially unspecified dynamics are common; for instance, a deployed robot might not know a priori various features of the terrain it has to navigate. ∗Equal Contribution. Correspondence to {[email protected], [email protected]} 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. arXiv:2010.13303v1 [cs.LG] 26 Oct 2020

As a motivating example, we visualize the next states obtained by crippling one of the legs of an ant robot (see Figure 1a). Figure 1b shows that the target transition dynamics follow a multi-modal distribution, where each mode corresponds to each leg of a robot, even though the original environment has deterministic transition dynamics.
This implies that a model-based RL algorithm that can approximate the multi-modal distribution is required to develop a reliable and robust agent against changes in the dynamics. Several algorithms have been proposed to tackle this problem, e.g., learning contextual information to capture local dynamics [20], fine-tuning model parameters for fast adaptation [30]. These algorithms, however, are limited in that they do not explicitly learn dynamics models that can approximate the multi-modal distribution of transition dynamics. Contribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning (T-MCL), that can approximate the multi-modal distribution of transition dynamics in an unsupervised manner. To this end, we introduce a novel loss function, trajectory-wise oracle loss, for learning a multi-headed dynamics model where each prediction head specializes in different environments (see Figure 2a). By updating the most accurate prediction head over a trajectory segment (see Figure 2b), we discover that specialized prediction heads emerge automatically. Namely, our method can effectively cluster environments without any prior knowledge of environments. To further enable the model to perform online adaptation to unseen environments, we also incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector and provides it as an additional input to prediction heads (see Figure 2a). Finally, to utilize the specialized prediction heads more effectively, we propose adaptive planning that selects actions using the most accurate prediction head over a recent experience, which can be interpreted as finding the nearest cluster to the current environment (see Figure 2c). We demonstrate the effectiveness of T-MCL on various control tasks from OpenAI Gym [3]. For evaluation, we measure the generalization performance of model-based RL agents on unseen (yet related) environments with different transition dynamics. In our experiments, T-MCL exhibits superior generalization performance compared to existing model-based RL methods [4, 20, 30]. For example, compared to CaDM [20], a state-of-the-art model-based RL method for dynamics generalization, our method obtains 3.5x higher average return on the CrippledAnt environment. 2 Related work Model-based reinforcement learning. By learning a forward dynamics model that approximates the transition dynamics of environments, model-based RL attains a superior sample-efficiency. Such a learned dynamics model can be used as a simulator for model-free RL methods [16, 18, 40], providing a prior or additional features to a policy [9, 47], or planning ahead to select actions by predicting the future consequences of actions [1, 22, 42]. A major challenge in model-based RL is to learn accurate dynamics models that can provide correct future predictions. To this end, numerous methods thus have been proposed, including ensembles [4] and latent dynamics models [14, 15, 37]. While these methods have made significant progress even in complex domains [15, 37], dynamics models still struggle to provide accurate predictions on unseen environments [20, 30]. 
Multiple choice learning. Multiple choice learning (MCL) [12, 13] is an ensemble method where the objective is to minimize an oracle loss, making at least one ensemble member predict the correct answer. By making the most accurate model optimize the loss, MCL encourages the model to produce multiple outputs of high quality. Even though several optimization algorithms [8, 12] have been proposed for MCL objectives, it is challenging to employ these for learning deep neural networks due to training complexity. To address this problem, Lee et al. [21] proposed a stochastic gradient descent based algorithm. Recently, several methods [19, 29, 43] have been proposed to further improve MCL by tackling the issue of overconfidence problem of neural networks. While most prior works on MCL have focused on supervised learning, our method applies the MCL algorithm to model-based RL.
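To make the oracle-loss idea concrete, the following is a minimal sketch of the MCL objective for a toy regression ensemble; the arrays, squared error, and toy data are illustrative assumptions, and only the min-over-members structure is the point.

import numpy as np

def mcl_oracle_loss(predictions, targets):
    """Oracle loss of an ensemble: each example is charged only to its best member.

    predictions: array of shape (n_members, n_examples), one prediction per member.
    targets: array of shape (n_examples,).
    Returns the mean oracle loss and, per example, the index of the winning member
    (the only member that would receive a gradient for that example).
    """
    errors = (predictions - targets[None, :]) ** 2   # (n_members, n_examples)
    winners = errors.argmin(axis=0)                  # best member per example
    oracle = errors.min(axis=0).mean()               # min over members, mean over data
    return oracle, winners

# Tiny usage example with 3 ensemble members and 5 targets.
preds = np.array([[0.9, 2.1, 3.2, 3.9, 5.5],
                  [1.2, 1.8, 2.7, 4.2, 4.8],
                  [0.5, 2.5, 3.0, 4.0, 5.1]])
targets = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
loss, winners = mcl_oracle_loss(preds, targets)

T-MCL, introduced below, replaces the per-example minimum with a per-trajectory-segment minimum over prediction heads.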
Dynamics generalization and adaptation in model-based RL. Prior dynamics generalization methods have aimed to either encode inductive biases into the architecture [36] or to learn contextual information that captures the local dynamics [20]. Notably, Lee et al. [20] introduced a context encoder that captures dynamics-specific information of environments, and improved the generalization ability by providing a context latent vector as additional input. Our method further improves this method by combining multiple choice learning and context learning. For dynamics adaptation, several meta-learning based methods have been studied [30, 31, 35]. Recently, Nagabandi et al. [30] proposed a model-based meta-RL method that adapts to recent experiences either by updating model parameters via a small number of gradient updates [11] or by updating hidden representations of a recurrent model [10]. Our method differs from this method, in that we do not fine-tune the model parameters to adapt to new environments at evaluation time.

3 Problem setup

We consider the standard RL framework where an agent optimizes a specified reward function through interacting with an environment. Formally, we formulate our problem as a discrete-time Markov decision process (MDP) [41], which is defined as a tuple (S, A, p, r, γ, ρ_0). Here, S is the state space, A is the action space, p(s′|s, a) is the transition dynamics, r(s, a) is the reward function, ρ_0 is the initial state distribution, and γ ∈ [0, 1) is the discount factor. The goal of RL is to obtain a policy, mapping from states to actions, that maximizes the expected return defined as the total accumulated reward. We tackle this problem in the context of model-based RL by learning a forward dynamics model f, which approximates the transition dynamics p(s′|s, a). Then, the dynamics model f is used to provide training data for a policy or predict the future consequences of actions for planning. In order to address the problem of generalization, we further consider a distribution of MDPs, where the transition dynamics p_c(s′|s, a) varies according to a context c. For instance, a robot agent's transition dynamics may change when some of its parts malfunction due to unexpected damage. Our goal is to learn a generalizable forward dynamics model that is robust to such dynamics changes, i.e., approximating the multi-modal distribution of transition dynamics p(s′|s, a) = ∫_c p(c) p_c(s′|s, a) dc. Specifically, given a set of training environments with contexts sampled from p_train(c), we aim to learn a forward dynamics model that can produce accurate predictions for both training environments and test environments with unseen (but related) contexts sampled from p_test(c).

Algorithm 1 Trajectory-wise MCL (T-MCL)
Initialize parameters of backbone network θ, prediction heads {θ^head_h}_{h=1}^H, context encoder φ. Initialize dataset B ← ∅.
for each iteration do
    // ENVIRONMENT INTERACTION
    Sample c ∼ p_seen(c).
    for t = 1 to TaskHorizon do
        Get context latent vector z_t = g(τ^P_{t,K}; φ) and select the best prediction head h* from Eq. (3)
        Collect {(s_t, a_t, s_{t+1}, r_t, τ^P_{t,K})} from the environment with transition dynamics p_c using h*.
        Update B ← B ∪ {(s_t, a_t, s_{t+1}, r_t, τ^P_{t,K})}.
    end for
    // DYNAMICS AND CONTEXT LEARNING
    Initialize L_tot ← 0
    Sample {τ^P_{t_j,K}, τ^F_{t_j,M}}_{j=1}^B ∼ B
    for j = 1 to B do
        for h = 1 to H do
            Compute the loss of the h-th prediction head:
            L^h_j ← −(1/M) Σ_{i=t_j}^{t_j+M−1} log f(s_{i+1} | b(s_i, a_i; θ), g(τ^P_{i,K}; φ); θ^head_h)
        end for
        Find h* = argmin_{h∈[H]} L^h_j and update L_tot ← L_tot + L^{h*}_j
    end for
    Update θ, φ, {θ^head_h}_{h=1}^H using ∇_{θ, φ, {θ^head_h}_{h=1}^H} L_tot
end for

4 Trajectory-wise multiple choice learning

In this section, we propose trajectory-wise multiple choice learning (T-MCL), which learns a multi-headed dynamics model for dynamics generalization. We first present a trajectory-wise oracle loss for making each prediction head specialize in different environments, and then introduce a context-conditional prediction head to further improve generalization. Finally, we propose an adaptive planning method that generates actions by planning under the most accurate prediction head over a recent experience.

4.1 Trajectory-wise oracle loss for multi-headed dynamics model

To approximate the multi-modal distribution of transition dynamics, we introduce a multi-headed dynamics model {f(s_{t+1} | b(s_t, a_t; θ); θ^head_h)}_{h=1}^H that consists of a backbone network b parameterized by θ and H prediction heads parameterized by {θ^head_h}_{h=1}^H (see Figure 2a). To make each prediction head specialize in different environments, we propose a trajectory-wise oracle loss defined as follows:

L_T-MCL = E_{τ^F_{t,M} ∼ B} [ min_{h∈[H]} −(1/M) Σ_{i=t}^{t+M−1} log f(s_{i+1} | b(s_i, a_i; θ); θ^head_h) ],   (1)

where [H] is the set {1, · · · , H}, τ^F_{t,M} = (s_t, a_t, · · · , s_{t+M−1}, a_{t+M−1}, s_{t+M}) denotes a trajectory segment of size M, and B = {τ^F_{t,M}} is the training dataset. The proposed loss is designed to only update the most accurate prediction head over each trajectory segment for specialization (see Figure 2b). By considering the accumulated prediction error over trajectory segments, the proposed oracle loss can assign trajectories from different transition dynamics to different prediction heads more distinctively (see Figure 4 for supporting experimental results). Namely, our method clusters environments in an unsupervised manner. We also remark that the shared backbone network learns common features across all environments, which provides several advantages, such as improving sample-efficiency and reducing computational costs.

4.2 Context-conditional multi-headed dynamics model

To further enable the dynamics model to perform online adaptation to unseen environments, we introduce a context encoder g parameterized by φ, which produces a latent vector g(τ^P_{t,K}; φ) given K past transitions (s_{t−K}, a_{t−K}, · · · , s_{t−1}, a_{t−1}). This context encoder operates under the assumption that the true context of the underlying MDP can be captured from recent experiences [20, 30, 34, 48]. Using this context encoder, we propose to learn a context-conditional multi-headed dynamics model optimized by minimizing the following oracle loss:

L_T-MCL^context = E_{(τ^F_{t,M}, τ^P_{t,K}) ∼ B} [ min_{h∈[H]} −(1/M) Σ_{i=t}^{t+M−1} log f(s_{i+1} | b(s_i, a_i; θ), g(τ^P_{i,K}; φ); θ^head_h) ].   (2)

We remark that the dynamics generalization of T-MCL can be enhanced by incorporating the contextual information into the dynamics model, enabling its online adaptation. To extract more meaningful contextual information, we also utilize various auxiliary prediction losses proposed in Lee et al. [20] (see the supplementary material for more details).

4.3 Adaptive planning

Once a multi-headed dynamics model is learned, it can be used for selecting actions by planning.
Since the performance of planning depends on the quality of predictions, it is important to select the prediction head specialized in the current environment for planning. To this end, following the idea of Narendra & Balakrishnan [32], we propose an adaptive planning method that selects the most accurate prediction head over a recent experience (see Figure 2c). Formally, given N past transitions, we select the prediction head h* as follows:

h* = argmin_{h∈[H]} Σ_{i=t−N}^{t−2} ℓ(s_{i+1}, f(b(s_i, a_i), g(τ^P_{i,K}; φ); θ^head_h)),   (3)

where ℓ is the mean squared error function. One can note that this planning method corresponds to finding the nearest cluster to the current environment. We empirically show that this adaptive planning significantly improves the performance by selecting the prediction head specialized in a specific environment (see Figure 6b for supporting experimental results).

5 Experiments

In this section, we designed our experiments to answer the following questions:
• How does our method compare to existing model-based RL methods and a state-of-the-art model-free meta-RL method (see Figure 3)?
• Can prediction heads be specialized for a certain subset of training environments with similar dynamics (see Figure 4 and Figure 5)?
• Is the multi-headed architecture useful for dynamics generalization of other model-based RL methods (see Figure 6a)?
• Does adaptive planning improve generalization performance (see Figure 6b)?
• Can T-MCL extract meaningful contextual information from complex environments (see Figure 6c and Figure 6d)?

5.1 Setups

Environments. We demonstrate the effectiveness of our proposed method on classic control problems (i.e., CartPoleSwingUp and Pendulum) from OpenAI Gym [3] and simulated robotic continuous tasks (i.e., Hopper, SlimHumanoid, HalfCheetah, and CrippledAnt) from the MuJoCo physics engine [44]. To evaluate the generalization performance, we designed environments to follow a multi-modal distribution by changing the environment parameters (e.g., length and mass), similar to Packer et al. [33] and Zhou et al. [48]. We use two predefined discrete sets of environment parameters for training and test environments, where parameters for test environments are outside the range of training parameters. Then, we learn a dynamics model on environments whose transition dynamics are characterized by the environment parameters randomly sampled before the episode starts. For evaluation, we report the performance of a trained dynamics model on unseen environments whose environment parameters are randomly sampled from the test parameter set. Similar to prior works [4, 30, 46], we assume that the reward function of environments is known, i.e., ground-truth rewards at the predicted states are available for planning. For all our experiments, we report the mean and standard deviation across three runs. We provide more details in the supplementary material.

Planning. We use model predictive control (MPC) [27] to select actions based on the learned dynamics model. Specifically, we use the cross entropy method (CEM) [6] to optimize action sequences by iteratively re-sampling action sequences near the best-performing action sequences from the last iteration.

Implementation details of T-MCL. For all experiments, we use an ensemble of multi-headed dynamics models that are independently optimized with the trajectory-wise oracle loss. We reduce modeling errors by training multiple dynamics models [4] (a minimal code sketch of the oracle loss and the adaptive head selection is given below).
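As a concrete illustration of the trajectory-wise oracle loss in Eq. (2) and the adaptive head selection in Eq. (3), here is a minimal sketch; the precomputed per-head error arrays are an assumption for illustration and do not reproduce the authors' exact implementation.

import numpy as np

def trajectory_oracle_loss(per_head_nll):
    """Trajectory-wise oracle loss of Eq. (2) for one trajectory segment.

    per_head_nll: array of shape (H, M) holding per-transition negative
    log-likelihoods -log f(s_{i+1} | b(s_i, a_i), g(tau^P); head h), assumed
    precomputed. Only the head with the smallest segment-averaged error is trained.
    """
    per_head = per_head_nll.mean(axis=1)   # average over the M-step segment
    best_head = int(per_head.argmin())
    return per_head[best_head], best_head

def select_head_for_planning(per_head_pred, next_states):
    """Adaptive planning head selection of Eq. (3).

    per_head_pred: array of shape (H, N, state_dim), each head's next-state
    predictions over the N most recent transitions.
    next_states: array of shape (N, state_dim) with the observed next states.
    """
    mse = ((per_head_pred - next_states[None]) ** 2).sum(axis=-1).mean(axis=1)
    return int(mse.argmin())

Only the winning head receives a gradient for a given segment, which is what drives the specialization (clustering) behaviour analyzed in Section 5.3.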
To construct a trajectory segment τ^F_{t,M} in Eq. (1), we use M transitions randomly sampled from a trajectory instead of consecutive transitions (s_t, a_t, · · · , s_{t+M}). We empirically found that this stabilizes the training by breaking the temporal correlations of the training data. We also remark that the same hyperparameters are used for all experiments except Pendulum, which has a short task horizon. We provide more details in the supplementary material.

Baselines. To evaluate the performance of our method, we consider the following model-based and model-free RL methods:
• Probabilistic ensemble dynamics model (PETS) [4]: an ensemble of probabilistic dynamics models that captures the uncertainty in modeling and planning. PETS employs ensembles of single-headed dynamics models optimized to cover all training environments, while T-MCL employs ensembles of multi-headed dynamics models specialized to a subset of environments.

[Figure 4 panels: fraction of training trajectories assigned to Head 1–3 for varying mass/length on (a) CartPoleSwingUp, (b) Pendulum, and (c) HalfCheetah. Figure 5 panels: (a) Multiple choice learning (MCL), (b) Trajectory-wise MCL (T-MCL), (c) Generalization performance.]

5.2 Comparative evaluation on control tasks

Figure 3 shows the generalization performances of our method and baseline methods on unseen environments (see the supplementary material for training curve plots). Our method significantly outperforms all model-based RL baselines in all environments. In particular, T-MCL achieves an average return of 19280.1 on HalfCheetah environments while that of PETS is 2223.9. This result demonstrates that our method is more effective for dynamics generalization, compared to the independent ensemble of dynamics models. On the other hand, model-based meta-RL methods (ReBAL and GrBAL) do not exhibit a significant performance gain over PETS, which shows the difficulty of adapting a dynamics model to unseen environments via meta-learning. We also remark that CaDM does not consistently improve over PETS, due to the difficulty in context learning with a discrete number of training environments. We observed that T-MCL sometimes reaches the performance of PEARL or even outperforms it in terms of both sample-efficiency and asymptotic performance. This result demonstrates the effectiveness of our method for dynamics generalization, especially given that PEARL adapts to test environments by collecting trajectories at evaluation time.

[Figure 6 panels: (a) Multi-head architecture, (b) Adaptive planning, (c) Context learning, (d) t-SNE visualization; legends include PETS, Multi-Headed PETS, T-MCL, T-MCL (Non-adaptive), T-MCL (No Context), and mass values 0.25/0.50/1.50/2.50.]

Figure 6: (a) Generalization performance of PETS and Multi-Headed PETS on unseen HalfCheetah environments. (b) We compare the generalization performance of adaptive planning to non-adaptive planning on unseen HalfCheetah environments. (c) Generalization performance of trained dynamics models on unseen HalfCheetah environments.
One can observe that T-MCL still outperforms PETS without context learning, but this results in a significant performance drop. (d) t-SNE visualization of hidden features of context-conditional multi-headed dynamics model on HalfCheetah environments. 5.3 Analysis Specialization. To investigate the ability of our method to learn specialized prediction heads, we visualize how training trajectories are assigned to each head in Figure 4. One can observe that trajectories are distinctively assigned to prediction heads, while trajectories from environments with similar transition dynamics are assigned to the same prediction head. For example, we discover that the transition dynamics of Pendulum with length 1.0 and 1.25 are more similar to each other than Pendulum with other lengths (see the supplementary material for supporting figures), which implies that our method can cluster environments in an unsupervised manner. Effects of trajectory-wise loss. To further investigate the effectiveness of trajectory-wise oracle loss, we compare our method to MCL, where we consider only a single transition for selecting the model to optimize, i.e., M = 1 in (1). Figure 5a and Figure 5b show that training trajectories are more distinctively assigned to each head when we use T-MCL, which implies that trajectory-wise loss is indeed important for learning specialized prediction heads. Also, as shown in Figure 5c, this leads to superior generalization performance over the dynamics model trained with MCL, showing that learning specialized prediction heads improves the generalization performance. Effects of multi-headed dynamics model. We also analyze the isolated effect of employing multiheaded architecture on the generalization performance. To this end, we train the multi-headed version of PETS, i.e., ensemble of multi-headed dynamics models without trajectory-wise oracle loss, context learning, and adaptive planning. Figure 6a shows that multi-headed PETS does not improve the performance of vanilla PETS on HalfCheetah environments, which demonstrates the importance of training with trajectory-wise oracle loss and adaptively selecting the most accurate prediction head for achieving superior generalization performance of our method. Effects of adaptive planning. We investigate the importance of selecting the specialized prediction head adaptively. Specifically, we compare the performance of employing the proposed adaptive planning method to the performance of employing non-adaptive planning, i.e., planning with the average predictions of prediction heads. As shown in Figure 6b, the gain due to adaptive planning is significant, which confirms that proposed adaptive planning is important. Effects of context learning. We examine our choice of integrating context learning by comparing the performance of a context-conditional multi-headed dynamics model to the performance of a multi-headed dynamics model. As shown in Figure 6c, removing context learning scheme from the T-MCL results in steep performance degradation, which demonstrates the importance of incorporating contextual information. However, we remark that the T-MCL still outperforms PETS without context learning scheme. Also, we visualize the hidden features of a context-conditional multi-headed dynamics model on HalfCheetah environments using t-SNE [26] in Figure 6d. 
One can observe that features from the environments with different transition dynamics are separated in the embedding space, which implies that our method indeed learns meaningful contextual information. Effects of hyperparameters. Finally, we investigate how hyperparameters affect the performance of T-MCL. Specifically, we consider three hyperparameters, i.e., H ∈ {2, 3, 4, 5, 8} for the number of prediction heads in (2), M ∈ {1, 5, 10, 20, 30} for the horizon of trajectory-wise oracle loss in (2), and N ∈ {1, 5, 10, 20, 30} for the horizon of adaptive planning in (3). Figure 7a shows that H = 3 achieves the best performance because three prediction heads are enough to capture the multi-modality of the training environments in our setting. When H > 3, the performance decreases because trajectories from similar dynamics are split into multiple heads. Figure 7b and Figure 7c show that our method is robust to the horizons M,N , and considering more transitions can further improve the performance. We provide results for all environments in the supplementary material. 6 Conclusion In this work, we present trajectory-wise multiple choice learning, a new model-based RL algorithm that learns a multi-headed dynamics model for dynamics generalization. Our method consists of three key ingredients: (a) trajectory-wise oracle loss for multi-headed dynamics model, (b) contextconditional multi-headed dynamics model, and (c) adaptive planning. We show that our method can capture the multi-modal nature of environments in an unsupervised manner, and outperform existing model-based RL methods. Overall, we believe our approach would further strengthen the understanding of dynamics generalization and could be useful to other relevant topics such as model-based policy optimization methods [15, 16]. Broader Impact While deep reinforcement learning (RL) has been successful in a range of challenging domains, it still suffers from a lack of generalization ability to unexpected changes in surrounding environmental factors [20, 30]. This failure of autonomous agents to generalize across diverse environments is one of the major reasoning behind the objection to real-world deployment of RL agents. To tackle this problem, in this paper, we focus on developing more robust and generalizable RL algorithm, which could improve the applicability of deep RL to various real-world applications, such as robotics manipulation [17] and package delivery [2]. Such advances in the robustness of RL algorithm could contribute to improved productivity of society via the safe and efficient utilization of autonomous agents in a diverse range of industries. Unfortunately, however, we could also foresee the negative long-term consequences of deploying autonomous systems in the real-world. For example, autonomous agents could be abused by specifying harmful objectives such as autonomous weapons. While such malicious usage of autonomous agents was available long before the advent of RL algorithms, developing an RL algorithm for dynamics generalization may accelerate the real-world deployment of such malicious robots, e.g., autonomous drones loaded with explosives, by making them more robust to changing dynamics or defense systems. We would like to recommend the researchers to recognize this potential misuse as we further improve RL systems. Acknowledgments and Disclosure of Funding We thank Junsu Kim, Seunghyun Lee, Jongjin Park, Sihyun Yu, and our anonymous reviewers for feedback and discussions. 
This research is supported in part by ONR PECASE N000141612723, Tencent, Berkeley Deep Drive, Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF2018R1A5A1059921).
1. What is the main contribution of the paper in reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its application of Multiple-Choice Learning? 3. What are the weaknesses of the paper regarding its limitations and assumptions? 4. How does the reviewer assess the significance of the empirical work presented in the paper? 5. Are there any questions or concerns raised by the reviewer regarding the paper's content or methodology?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper addresses the problem of a reinforcement learning agent that has been trained in a variety of related environments and now faces a novel (still related) environment. The goal is to help the agent quickly adapt to the new dynamics. The main idea here is to leverage Multiple-Choice Learning, in which an ensemble is trained with the goal that at least one member of the ensemble should make the correct prediction (this is in contrast with other ensemble frameworks in which predictions are made by averaging predictions or by majority vote). In this case MCL is implemented by adding multiple prediction heads to a dynamics model. The heads are encouraged to specialize in different dynamics by training only the most accurate head in any given example. Experiments demonstrate in simulated robotics domains that T-MCL can adapt to unseen dynamics more readily than existing approaches and ablation studies show the importance of various components of the architecture. Strengths - The problem of developing agents that are robust to small changes in dynamics is important for making RL approaches more applicable in messy real-world settings. - Presents a novel (as far as I know) application of MCL to model-based RL. - Empirical work is strong: in addition to positive "cook-off" results, investigatory experiments support hypotheses about why the approach is effective and several ablation studies investigate the importance of individual components and their integration. Weaknesses A weakness of the paper is that I didn't find a clear discussion of drawbacks and limitations of the approach. Under what circumstances is this approach likely to fail? What implicit assumptions are made by the design decisions? One that occurs to me is that it doesn't seem like this approach would improve transfer/adaptation to a single change in dynamics and instead relies on the agent being exposed to a variety of MDPs in order to interpolate between them. The paper would be strengthened if it, in addition to demonstrating the benefits of the approach, clearly identified problems left open and barriers left to surmount.
NIPS
Title Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning Abstract Model-based reinforcement learning (RL) has shown great potential in various control tasks in terms of both sample-efficiency and final performance. However, learning a generalizable dynamics model robust to changes in dynamics remains a challenge since the target transition dynamics follow a multi-modal distribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning, that learns a multi-headed dynamics model for dynamics generalization. The main idea is updating the most accurate prediction head to specialize each head in certain environments with similar dynamics, i.e., clustering environments. Moreover, we incorporate context learning, which encodes dynamicsspecific information from past experiences into the context latent vector, enabling the model to perform online adaptation to unseen environments. Finally, to utilize the specialized prediction heads more effectively, we propose an adaptive planning method, which selects the most accurate prediction head over a recent experience. Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods. Source code and videos are available at https://sites.google.com/view/trajectory-mcl. 1 Introduction Deep reinforcement learning (RL) has exhibited wide success in solving sequential decision-making problems [23, 39, 45]. Early successful deep RL approaches had been mostly model-free, which do not require an explicit model of the environment, but instead directly learn a policy [25, 28, 38]. However, despite the strong asymptotic performance, the applications of model-free RL have largely been limited to simulated domains due to its high sample complexity. For this reason, model-based RL has been gaining considerable attention as a sample-efficient alternative, with an eye towards robotics and other physics domains. The increased sample-efficiency of model-based RL algorithms is obtained by exploiting the structure of the problem: first the agent learns a predictive model of the environment, and then plans ahead with the learned model [1, 37, 42]. Recently, substantial progress has been made on the sample-efficiency of model-based RL algorithms [5, 7, 22, 23, 24]. However, it has been evidenced that model-based RL algorithms are not robust to changes in the dynamics [20, 30], i.e., dynamics models fail to provide accurate predictions as the transition dynamics of environments change. This makes model-based RL algorithms unreliable to be deployed into real-world environments where partially unspecified dynamics are common; for instance, a deployed robot might not know a priori various features of the terrain it has to navigate. ∗Equal Contribution. Correspondence to {[email protected], [email protected]} 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. ar X iv :2 01 0. 13 30 3v 1 [ cs .L G ] 2 6 O ct 2 As a motivating example, we visualize the next states obtained by crippling one of the legs of an ant robot (see Figure 1a). Figure 1b shows that the target transition dynamics follow a multimodal distribution, where each mode corresponds to each leg of a robot, even though the original environment has deterministic transition dynamics. 
This implies that a model-based RL algorithm that can approximate the multi-modal distribution is required to develop a reliable and robust agent against changes in the dynamics. Several algorithms have been proposed to tackle this problem, e.g., learning contextual information to capture local dynamics [20], fine-tuning model parameters for fast adaptation [30]. These algorithms, however, are limited in that they do not explicitly learn dynamics models that can approximate the multi-modal distribution of transition dynamics. Contribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning (T-MCL), that can approximate the multi-modal distribution of transition dynamics in an unsupervised manner. To this end, we introduce a novel loss function, trajectory-wise oracle loss, for learning a multi-headed dynamics model where each prediction head specializes in different environments (see Figure 2a). By updating the most accurate prediction head over a trajectory segment (see Figure 2b), we discover that specialized prediction heads emerge automatically. Namely, our method can effectively cluster environments without any prior knowledge of environments. To further enable the model to perform online adaptation to unseen environments, we also incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector and provides it as an additional input to prediction heads (see Figure 2a). Finally, to utilize the specialized prediction heads more effectively, we propose adaptive planning that selects actions using the most accurate prediction head over a recent experience, which can be interpreted as finding the nearest cluster to the current environment (see Figure 2c). We demonstrate the effectiveness of T-MCL on various control tasks from OpenAI Gym [3]. For evaluation, we measure the generalization performance of model-based RL agents on unseen (yet related) environments with different transition dynamics. In our experiments, T-MCL exhibits superior generalization performance compared to existing model-based RL methods [4, 20, 30]. For example, compared to CaDM [20], a state-of-the-art model-based RL method for dynamics generalization, our method obtains 3.5x higher average return on the CrippledAnt environment. 2 Related work Model-based reinforcement learning. By learning a forward dynamics model that approximates the transition dynamics of environments, model-based RL attains a superior sample-efficiency. Such a learned dynamics model can be used as a simulator for model-free RL methods [16, 18, 40], providing a prior or additional features to a policy [9, 47], or planning ahead to select actions by predicting the future consequences of actions [1, 22, 42]. A major challenge in model-based RL is to learn accurate dynamics models that can provide correct future predictions. To this end, numerous methods thus have been proposed, including ensembles [4] and latent dynamics models [14, 15, 37]. While these methods have made significant progress even in complex domains [15, 37], dynamics models still struggle to provide accurate predictions on unseen environments [20, 30]. 
Multiple choice learning. Multiple choice learning (MCL) [12, 13] is an ensemble method where the objective is to minimize an oracle loss, making at least one ensemble member predict the correct answer. By letting only the most accurate member optimize the loss, MCL encourages the ensemble to produce multiple outputs of high quality. Even though several optimization algorithms [8, 12] have been proposed for MCL objectives, it is challenging to employ these for learning deep neural networks due to training complexity. To address this problem, Lee et al. [21] proposed a stochastic gradient descent based algorithm. Recently, several methods [19, 29, 43] have been proposed to further improve MCL by tackling the overconfidence problem of neural networks. While most prior works on MCL have focused on supervised learning, our method applies the MCL algorithm to model-based RL.
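To make the oracle objective concrete, the sketch below shows a generic winner-take-gradient MCL loss in PyTorch; it is an illustration only, with assumed names (`members`, `per_example_loss`), and is not code from any of the cited works.

```python
import torch

def mcl_oracle_loss(members, per_example_loss, x, y):
    """Generic MCL oracle loss: evaluate every ensemble member and keep,
    for each example, only the smallest loss, so that gradients flow
    solely into the currently most accurate member."""
    # (K, batch): per-example loss of each of the K ensemble members
    losses = torch.stack([per_example_loss(m(x), y) for m in members])
    oracle = losses.min(dim=0).values  # best member per example
    return oracle.mean()
```

Here `per_example_loss` is assumed to return one loss value per example (e.g., a squared error reduced over feature dimensions only), so that the minimum acts example-wise.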
Dynamics generalization and adaptation in model-based RL. Prior dynamics generalization methods have aimed to either encode inductive biases into the architecture [36] or to learn contextual information that captures the local dynamics [20]. Notably, Lee et al. [20] introduced a context encoder that captures dynamics-specific information of environments, and improved the generalization ability by providing a context latent vector as an additional input. Our method further improves this method by combining multiple choice learning and context learning. For dynamics adaptation, several meta-learning based methods have been studied [30, 31, 35]. Recently, Nagabandi et al. [30] proposed a model-based meta-RL method that adapts to recent experiences either by updating model parameters via a small number of gradient updates [11] or by updating hidden representations of a recurrent model [10]. Our method differs from this method in that we do not fine-tune the model parameters to adapt to new environments at evaluation time. 3 Problem setup We consider the standard RL framework where an agent optimizes a specified reward function through interacting with an environment. Formally, we formulate our problem as a discrete-time Markov decision process (MDP) [41], which is defined as a tuple (S, A, p, r, γ, ρ_0). Here, S is the state space, A is the action space, p(s′|s, a) is the transition dynamics, r(s, a) is the reward function, ρ_0 is the initial state distribution, and γ ∈ [0, 1) is the discount factor. The goal of RL is to obtain a policy, mapping from states to actions, that maximizes the expected return defined as the total accumulated reward. We tackle this problem in the context of model-based RL by learning a forward dynamics model f, which approximates the transition dynamics p(s′|s, a). Then, the dynamics model f is used to provide training data for a policy or to predict the future consequences of actions for planning. In order to address the problem of generalization, we further consider a distribution of MDPs, where the transition dynamics p_c(s′|s, a) varies according to a context c. For instance, a robot agent’s transition dynamics may change when some of its parts malfunction due to unexpected damage. Our goal is to learn a generalizable forward dynamics model that is robust to such dynamics changes, i.e., approximating the multi-modal distribution of transition dynamics

p(s′|s, a) = ∫_c p(c) p_c(s′|s, a) dc.

Specifically, given a set of training environments with contexts sampled from p_train(c), we aim to learn a forward dynamics model that can produce accurate predictions for both training environments and test environments with unseen (but related) contexts sampled from p_test(c).

Algorithm 1 Trajectory-wise MCL (T-MCL)
  Initialize parameters of the backbone network θ, the prediction heads {θ^head_h}_{h=1}^H, and the context encoder φ. Initialize the dataset B ← ∅.
  for each iteration do
    Sample c ∼ p_seen(c).
    // ENVIRONMENT INTERACTION
    for t = 1 to TaskHorizon do
      Get the context latent vector z_t = g(τ^P_{t,K}; φ) and select the best prediction head h* from (3).
      Collect {(s_t, a_t, s_{t+1}, r_t, τ^P_{t,K})} from the environment with transition dynamics p_c using h*.
      Update B ← B ∪ {(s_t, a_t, s_{t+1}, r_t, τ^P_{t,K})}.
    end for
    // DYNAMICS AND CONTEXT LEARNING
    Initialize L_tot ← 0.
    Sample {τ^P_{t_j,K}, τ^F_{t_j,M}}_{j=1}^B ∼ B.
    for j = 1 to B do
      for h = 1 to H do
        Compute the loss of the h-th prediction head:
          L^h_j ← −(1/M) Σ_{i=t_j}^{t_j+M−1} log f(s_{i+1} | b(s_i, a_i; θ), g(τ^P_{i,K}; φ); θ^head_h).
      end for
      Find h* = argmin_{h∈[H]} L^h_j and update L_tot ← L_tot + L^{h*}_j.
    end for
    Update θ, φ, {θ^head_h}_{h=1}^H using ∇_{θ, φ, {θ^head_h}_{h=1}^H} L_tot.
  end for

4 Trajectory-wise multiple choice learning In this section, we propose trajectory-wise multiple choice learning (T-MCL), which learns a multi-headed dynamics model for dynamics generalization. We first present a trajectory-wise oracle loss for making each prediction head specialize in different environments, and then introduce a context-conditional prediction head to further improve generalization. Finally, we propose an adaptive planning method that generates actions by planning under the most accurate prediction head over a recent experience. 4.1 Trajectory-wise oracle loss for multi-headed dynamics model To approximate the multi-modal distribution of transition dynamics, we introduce a multi-headed dynamics model {f(s_{t+1} | b(s_t, a_t; θ); θ^head_h)}_{h=1}^H that consists of a backbone network b parameterized by θ and H prediction heads parameterized by {θ^head_h}_{h=1}^H (see Figure 2a). To make each prediction head specialize in different environments, we propose a trajectory-wise oracle loss defined as follows:

L^T-MCL = E_{τ^F_{t,M} ∼ B} [ min_{h ∈ [H]} −(1/M) Σ_{i=t}^{t+M−1} log f(s_{i+1} | b(s_i, a_i; θ); θ^head_h) ],   (1)

where [H] is the set {1, · · · , H}, τ^F_{t,M} = (s_t, a_t, · · · , s_{t+M−1}, a_{t+M−1}, s_{t+M}) denotes a trajectory segment of size M, and B = {τ^F_{t,M}} is the training dataset. The proposed loss is designed to only update the most accurate prediction head over each trajectory segment for specialization (see Figure 2b). By considering the accumulated prediction error over trajectory segments, the proposed oracle loss can assign trajectories from different transition dynamics to different prediction heads more distinctively (see Figure 4 for supporting experimental results). Namely, our method clusters environments in an unsupervised manner. We also remark that the shared backbone network learns common features across all environments, which provides several advantages, such as improving sample-efficiency and reducing computational costs. 4.2 Context-conditional multi-headed dynamics model To further enable the dynamics model to perform online adaptation to unseen environments, we introduce a context encoder g parameterized by φ, which produces a latent vector g(τ^P_{t,K}; φ) given K past transitions (s_{t−K}, a_{t−K}, · · · , s_{t−1}, a_{t−1}). This context encoder operates under the assumption that the true context of the underlying MDP can be captured from recent experiences [20, 30, 34, 48]. Using this context encoder, we propose to learn a context-conditional multi-headed dynamics model optimized by minimizing the following oracle loss:

L^T-MCL_context = E_{(τ^F_{t,M}, τ^P_{t,K}) ∼ B} [ min_{h ∈ [H]} −(1/M) Σ_{i=t}^{t+M−1} log f(s_{i+1} | b(s_i, a_i; θ), g(τ^P_{i,K}; φ); θ^head_h) ].   (2)

We remark that the dynamics generalization of T-MCL can be enhanced by incorporating the contextual information into the dynamics model, enabling its online adaptation. To extract more meaningful contextual information, we also utilize various auxiliary prediction losses proposed in Lee et al. [20] (see the supplementary material for more details). 4.3 Adaptive planning Once a multi-headed dynamics model is learned, it can be used for selecting actions by planning.
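For concreteness, the following is a minimal PyTorch-style sketch of a context-conditional multi-headed dynamics model of this form, together with a head-selection routine in the spirit of the adaptive planning rule referenced in Algorithm 1 and defined in (3) below. All class, layer, and argument names (e.g., MultiHeadDynamics, past_dim) are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadDynamics(nn.Module):
    """Sketch: shared backbone b(s, a; theta), context encoder g(tau^P; phi),
    and H prediction heads, each parameterizing a Gaussian over the next state."""

    def __init__(self, state_dim, action_dim, past_dim,
                 hidden=200, context_dim=10, num_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.context_encoder = nn.Sequential(
            nn.Linear(past_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, context_dim))
        # Each head outputs the mean and log-std of the next-state distribution.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden + context_dim, 2 * state_dim)
             for _ in range(num_heads)])

    def _features(self, s, a, past):
        # Concatenate backbone features b(s, a) with the context vector g(tau^P).
        return torch.cat([self.backbone(torch.cat([s, a], dim=-1)),
                          self.context_encoder(past)], dim=-1)

    def head_log_prob(self, h, s, a, past, next_s):
        """log f(s' | b(s, a), g(tau^P); theta^head_h) for head h."""
        mean, log_std = self.heads[h](self._features(s, a, past)).chunk(2, dim=-1)
        dist = torch.distributions.Normal(mean, log_std.exp())
        return dist.log_prob(next_s).sum(dim=-1)

    @torch.no_grad()
    def select_head(self, s, a, past, next_s):
        """Pick the head with the smallest mean-squared error on a batch of
        recent transitions (adaptive planning, cf. Eq. (3))."""
        feat = self._features(s, a, past)
        errors = [F.mse_loss(self.heads[h](feat).chunk(2, dim=-1)[0], next_s)
                  for h in range(len(self.heads))]
        return int(torch.argmin(torch.stack(errors)))
```

The Gaussian heads mirror the log-likelihood terms in (1) and (2); the network widths, depths, and the unbounded log-std are arbitrary choices made for illustration.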
Since the performance of planning depends on the quality of predictions, it is important to select the prediction head specialized in the current environment for planning. To this end, following the idea of Narendra & Balakrishnan [32], we propose an adaptive planning method that selects the most accurate prediction head over a recent experience (see Figure 2c). Formally, given N past transitions, we select the prediction head h* as follows:

h* = argmin_{h ∈ [H]} Σ_{i=t−N}^{t−2} ℓ(s_{i+1}, f(b(s_i, a_i), g(τ^P_{i,K}; φ); θ^head_h)),   (3)

where ℓ is the mean squared error function. One can note that this planning method corresponds to finding the nearest cluster to the current environment. We empirically show that this adaptive planning significantly improves the performance by selecting the prediction head specialized in a specific environment (see Figure 6b for supporting experimental results). 5 Experiments In this section, we designed our experiments to answer the following questions:
• How does our method compare to existing model-based RL methods and a state-of-the-art model-free meta-RL method (see Figure 3)?
• Can prediction heads be specialized for a certain subset of training environments with similar dynamics (see Figure 4 and Figure 5)?
• Is the multi-headed architecture useful for dynamics generalization of other model-based RL methods (see Figure 6a)?
• Does adaptive planning improve generalization performance (see Figure 6b)?
• Can T-MCL extract meaningful contextual information from complex environments (see Figure 6c and Figure 6d)?
5.1 Setups Environments. We demonstrate the effectiveness of our proposed method on classic control problems (i.e., CartPoleSwingUp and Pendulum) from OpenAI Gym [3] and simulated robotic continuous control tasks (i.e., Hopper, SlimHumanoid, HalfCheetah, and CrippledAnt) from the MuJoCo physics engine [44]. To evaluate the generalization performance, we designed environments to follow a multi-modal distribution by changing the environment parameters (e.g., length and mass), similar to Packer et al. [33] and Zhou et al. [48]. We use two predefined discrete sets of environment parameters for training and test environments, where parameters for test environments are outside the range of the training parameters. Then, we learn a dynamics model on environments whose transition dynamics are characterized by environment parameters randomly sampled before the episode starts. For evaluation, we report the performance of a trained dynamics model on unseen environments whose environment parameters are randomly sampled from the test parameter set. Similar to prior works [4, 30, 46], we assume that the reward function of environments is known, i.e., ground-truth rewards at the predicted states are available for planning. For all our experiments, we report the mean and standard deviation across three runs. We provide more details in the supplementary material. Planning. We use model predictive control (MPC) [27] to select actions based on the learned dynamics model. Specifically, we use the cross entropy method (CEM) [6] to optimize action sequences by iteratively re-sampling action sequences near the best-performing action sequences from the last iteration. Implementation details of T-MCL. For all experiments, we use an ensemble of multi-headed dynamics models that are independently optimized with the trajectory-wise oracle loss. We reduce modeling errors by training multiple dynamics models [4].
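To make the training loop concrete, a hedged sketch of one update with the trajectory-wise oracle loss in (2), reusing the MultiHeadDynamics sketch above, might look as follows; the batch layout, variable names, and optimizer are assumptions rather than details taken from the paper.

```python
import torch

def t_mcl_update(model, optimizer, batch):
    """One gradient step on the trajectory-wise oracle loss (Eq. (2)).
    `batch` holds B trajectory segments of length M:
    s, a, next_s have shape (B, M, dim) and past has shape (B, M, past_dim)."""
    s, a, next_s, past = batch
    B, M = s.shape[0], s.shape[1]
    flat = lambda x: x.reshape(B * M, -1)
    # Per-head negative log-likelihood, averaged over each segment: (H, B)
    seg_losses = torch.stack([
        -model.head_log_prob(h, flat(s), flat(a), flat(past),
                             flat(next_s)).reshape(B, M).mean(dim=1)
        for h in range(len(model.heads))])
    # Oracle loss: only the best head per segment receives gradient.
    loss = seg_losses.min(dim=0).values.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's setting this update would be applied to each ensemble member independently; how the trajectory segments themselves are constructed is described next.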
To construct a trajectory segment τ^F_{t,M} in (1), we use M transitions randomly sampled from a trajectory instead of consecutive transitions (s_t, a_t, · · · , s_{t+M}). We empirically found that this stabilizes the training by breaking the temporal correlations of the training data. We also remark that the same hyperparameters are used for all experiments except Pendulum, which has a short task horizon. We provide more details in the supplementary material. Baselines. To evaluate the performance of our method, we consider the following model-based and model-free RL methods:
• Probabilistic ensemble dynamics model (PETS) [4]: an ensemble of probabilistic dynamics models that captures the uncertainty in modeling and planning. PETS employs ensembles of single-headed dynamics models optimized to cover all training environments, while T-MCL employs ensembles of multi-headed dynamics models specialized to a subset of environments.
[Figure 4: percentage of training trajectories assigned to each prediction head (Heads 1-3) across environment parameters; (a) CartPoleSwingUp (mass 0.25/0.50/1.50/2.50), (b) Pendulum (length 0.50/0.75/1.0/1.25), (c) HalfCheetah (mass 0.25/0.50/1.50/2.50).]
[Figure 5: (a) Multiple choice learning (MCL), (b) Trajectory-wise MCL (T-MCL), (c) Generalization performance.]
5.2 Comparative evaluation on control tasks Figure 3 shows the generalization performance of our method and baseline methods on unseen environments (see the supplementary material for training curve plots). Our method significantly outperforms all model-based RL baselines in all environments. In particular, T-MCL achieves an average return of 19280.1 on HalfCheetah environments while that of PETS is 2223.9. This result demonstrates that our method is more effective for dynamics generalization, compared to the independent ensemble of dynamics models. On the other hand, model-based meta-RL methods (ReBAL and GrBAL) do not exhibit significant performance gains over PETS, which shows the difficulty of adapting a dynamics model to unseen environments via meta-learning. We also remark that CaDM does not consistently improve over PETS, due to the difficulty of context learning with a discrete number of training environments. We observed that T-MCL sometimes reaches the performance of PEARL or even outperforms it in terms of both sample-efficiency and asymptotic performance. This result demonstrates the effectiveness of our method for dynamics generalization, especially given that PEARL adapts to test environments by collecting trajectories at evaluation time.
[Figure 6 plots of average return vs. timesteps: (a) Multi-head architecture (PETS vs. Multi-Headed PETS), (b) Adaptive planning (T-MCL vs. T-MCL Non-adaptive), (c) Context learning (PETS, T-MCL, T-MCL No Context), (d) t-SNE visualization (Mass = 0.25, 0.50, 1.50, 2.50).]
Figure 6: (a) Generalization performance of PETS and Multi-Headed PETS on unseen HalfCheetah environments. (b) We compare the generalization performance of adaptive planning to non-adaptive planning on unseen HalfCheetah environments. (c) Generalization performance of trained dynamics models on unseen HalfCheetah environments.
One can observe that T-MCL still outperforms PETS without context learning, but removing it results in a significant performance drop. (d) t-SNE visualization of hidden features of the context-conditional multi-headed dynamics model on HalfCheetah environments. 5.3 Analysis Specialization. To investigate the ability of our method to learn specialized prediction heads, we visualize how training trajectories are assigned to each head in Figure 4. One can observe that trajectories are distinctively assigned to prediction heads, while trajectories from environments with similar transition dynamics are assigned to the same prediction head. For example, we discover that the transition dynamics of Pendulum with lengths 1.0 and 1.25 are more similar to each other than to Pendulum with other lengths (see the supplementary material for supporting figures), which implies that our method can cluster environments in an unsupervised manner. Effects of trajectory-wise loss. To further investigate the effectiveness of the trajectory-wise oracle loss, we compare our method to MCL, where we consider only a single transition for selecting the model to optimize, i.e., M = 1 in (1). Figure 5a and Figure 5b show that training trajectories are more distinctively assigned to each head when we use T-MCL, which implies that the trajectory-wise loss is indeed important for learning specialized prediction heads. Also, as shown in Figure 5c, this leads to superior generalization performance over the dynamics model trained with MCL, showing that learning specialized prediction heads improves the generalization performance. Effects of multi-headed dynamics model. We also analyze the isolated effect of employing the multi-headed architecture on the generalization performance. To this end, we train the multi-headed version of PETS, i.e., an ensemble of multi-headed dynamics models without the trajectory-wise oracle loss, context learning, and adaptive planning. Figure 6a shows that multi-headed PETS does not improve the performance of vanilla PETS on HalfCheetah environments, which demonstrates the importance of training with the trajectory-wise oracle loss and adaptively selecting the most accurate prediction head for achieving the superior generalization performance of our method. Effects of adaptive planning. We investigate the importance of selecting the specialized prediction head adaptively. Specifically, we compare the performance of employing the proposed adaptive planning method to the performance of employing non-adaptive planning, i.e., planning with the average predictions of the prediction heads. As shown in Figure 6b, the gain due to adaptive planning is significant, which confirms that the proposed adaptive planning is important. Effects of context learning. We examine our choice of integrating context learning by comparing the performance of a context-conditional multi-headed dynamics model to the performance of a multi-headed dynamics model. As shown in Figure 6c, removing the context learning scheme from T-MCL results in a steep performance degradation, which demonstrates the importance of incorporating contextual information. However, we remark that T-MCL still outperforms PETS without the context learning scheme. Also, we visualize the hidden features of a context-conditional multi-headed dynamics model on HalfCheetah environments using t-SNE [26] in Figure 6d.
One can observe that features from environments with different transition dynamics are separated in the embedding space, which implies that our method indeed learns meaningful contextual information. Effects of hyperparameters. Finally, we investigate how hyperparameters affect the performance of T-MCL. Specifically, we consider three hyperparameters, i.e., H ∈ {2, 3, 4, 5, 8} for the number of prediction heads in (2), M ∈ {1, 5, 10, 20, 30} for the horizon of the trajectory-wise oracle loss in (2), and N ∈ {1, 5, 10, 20, 30} for the horizon of adaptive planning in (3). Figure 7a shows that H = 3 achieves the best performance because three prediction heads are enough to capture the multi-modality of the training environments in our setting. When H > 3, the performance decreases because trajectories from similar dynamics are split into multiple heads. Figure 7b and Figure 7c show that our method is robust to the horizons M and N, and that considering more transitions can further improve the performance. We provide results for all environments in the supplementary material. 6 Conclusion In this work, we present trajectory-wise multiple choice learning, a new model-based RL algorithm that learns a multi-headed dynamics model for dynamics generalization. Our method consists of three key ingredients: (a) a trajectory-wise oracle loss for the multi-headed dynamics model, (b) a context-conditional multi-headed dynamics model, and (c) adaptive planning. We show that our method can capture the multi-modal nature of environments in an unsupervised manner and outperform existing model-based RL methods. Overall, we believe our approach would further strengthen the understanding of dynamics generalization and could be useful to other relevant topics such as model-based policy optimization methods [15, 16]. Broader Impact While deep reinforcement learning (RL) has been successful in a range of challenging domains, it still suffers from a lack of generalization ability to unexpected changes in surrounding environmental factors [20, 30]. This failure of autonomous agents to generalize across diverse environments is one of the major reasons behind objections to the real-world deployment of RL agents. To tackle this problem, in this paper we focus on developing a more robust and generalizable RL algorithm, which could improve the applicability of deep RL to various real-world applications, such as robotic manipulation [17] and package delivery [2]. Such advances in the robustness of RL algorithms could contribute to the improved productivity of society via the safe and efficient utilization of autonomous agents in a diverse range of industries. Unfortunately, however, we can also foresee negative long-term consequences of deploying autonomous systems in the real world. For example, autonomous agents could be abused by specifying harmful objectives, such as in autonomous weapons. While such malicious usage of autonomous agents was possible long before the advent of RL algorithms, developing an RL algorithm for dynamics generalization may accelerate the real-world deployment of such malicious robots, e.g., autonomous drones loaded with explosives, by making them more robust to changing dynamics or defense systems. We would like to recommend that researchers recognize this potential for misuse as we further improve RL systems. Acknowledgments and Disclosure of Funding We thank Junsu Kim, Seunghyun Lee, Jongjin Park, Sihyun Yu, and our anonymous reviewers for feedback and discussions.
This research is supported in part by ONR PECASE N000141612723, Tencent, Berkeley Deep Drive, Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF2018R1A5A1059921).
1. What is the focus and contribution of the paper on multi-headed dynamics models? 2. What are the strengths of the proposed approach, particularly in terms of its performance and analysis? 3. What are the weaknesses of the paper, especially regarding the choice of heads and their impact on the results? 4. Do you have any concerns about the proposed method's ability to handle complex dynamics modeling tasks? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions - This paper proposes to use multi-headed dynamics models to better represent the multi-modal nature of dynamics modeling - They evaluate their proposed method on a suite of classical control tasks using MPC for planning/control - Their proposed method significantly outperforms existing approaches - Their analysis shows that specialization emerges among the multiple prediction heads Strengths - The proposed model is a sensible and logical combination of multiple choice learning and dynamics model learning - The proposed method is either on-par with or considerably outperforms existing approaches - The analysis and ablations of the proposed model are thorough and interesting Weaknesses - I would like to see an ablation on the number of heads used. - When choosing the head, should it be done per-step or per-trajectory? The experiment with M=1 somewhat answers this question but also entangles the optimization process with it.
NIPS
This research is supported in part by ONR PECASE N000141612723, Tencent, Berkeley Deep Drive, Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF2018R1A5A1059921).
1. What is the main contribution of the paper in model-based reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its technical soundness and evaluation? 3. What are the weaknesses of the paper regarding its hyperparameter selection and exploration? 4. How does the reviewer assess the significance and impact of the proposed improvement in the context of related work? 5. Are there any additional suggestions or requests for further investigation from the reviewer?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes to improve existing model-based RL methods with ensemble-based/multi-head dynamics models (e.g. PETS, PlaNet) from several angles, including a trajectory-wise loss, an auxiliary context learning task and an adaptive planning heuristic. The proposed improvement is evaluated on transfer tasks, where the dynamics changes slightly across training and testing envs, and demonstrates some advantages over multiple baselines. Strengths + The paper is overall well-written and easy to follow. The authors make their major contribution pretty clear. + The proposed improvement has limited novelty but is technically sound. + Related work and counterparts are discussed and compared in a reasonable way. The technical contributions are evaluated extensively. The results are convincing. Weaknesses Overall I find this submission makes no major mistakes and should perhaps be considered a borderline one, given that the contributions are more of an engineering endeavor but are evaluated properly, provided no other reviewer points out significant issues. I only have a few comments on this paper at this point: - If my understanding is correct, I feel the horizon of the trajectory-wise loss (M), the horizon of adaptive planning (N) and the number of heads (H) are the major hyper-parameters in the proposed improvement, but I can't find any illustration of how these three are selected during the evaluations. The authors are expected to clarify how the parameter search is conducted. A comprehensive ablation on how these parameters affect the results is also expected. - Since the proposed improvement is agnostic to the concrete MBRL algorithm, experiments with different planning backbones should be included, e.g. Dreamer (https://openreview.net/forum?id=S1lOTC4tDS), MBPO (https://arxiv.org/abs/1906.08253), etc.
NIPS
Title Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning Abstract Model-based reinforcement learning (RL) has shown great potential in various control tasks in terms of both sample-efficiency and final performance. However, learning a generalizable dynamics model robust to changes in dynamics remains a challenge since the target transition dynamics follow a multi-modal distribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning, that learns a multi-headed dynamics model for dynamics generalization. The main idea is updating the most accurate prediction head to specialize each head in certain environments with similar dynamics, i.e., clustering environments. Moreover, we incorporate context learning, which encodes dynamicsspecific information from past experiences into the context latent vector, enabling the model to perform online adaptation to unseen environments. Finally, to utilize the specialized prediction heads more effectively, we propose an adaptive planning method, which selects the most accurate prediction head over a recent experience. Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods. Source code and videos are available at https://sites.google.com/view/trajectory-mcl. 1 Introduction Deep reinforcement learning (RL) has exhibited wide success in solving sequential decision-making problems [23, 39, 45]. Early successful deep RL approaches had been mostly model-free, which do not require an explicit model of the environment, but instead directly learn a policy [25, 28, 38]. However, despite the strong asymptotic performance, the applications of model-free RL have largely been limited to simulated domains due to its high sample complexity. For this reason, model-based RL has been gaining considerable attention as a sample-efficient alternative, with an eye towards robotics and other physics domains. The increased sample-efficiency of model-based RL algorithms is obtained by exploiting the structure of the problem: first the agent learns a predictive model of the environment, and then plans ahead with the learned model [1, 37, 42]. Recently, substantial progress has been made on the sample-efficiency of model-based RL algorithms [5, 7, 22, 23, 24]. However, it has been evidenced that model-based RL algorithms are not robust to changes in the dynamics [20, 30], i.e., dynamics models fail to provide accurate predictions as the transition dynamics of environments change. This makes model-based RL algorithms unreliable to be deployed into real-world environments where partially unspecified dynamics are common; for instance, a deployed robot might not know a priori various features of the terrain it has to navigate. ∗Equal Contribution. Correspondence to {[email protected], [email protected]} 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. ar X iv :2 01 0. 13 30 3v 1 [ cs .L G ] 2 6 O ct 2 As a motivating example, we visualize the next states obtained by crippling one of the legs of an ant robot (see Figure 1a). Figure 1b shows that the target transition dynamics follow a multimodal distribution, where each mode corresponds to each leg of a robot, even though the original environment has deterministic transition dynamics. 
This implies that a model-based RL algorithm that can approximate the multi-modal distribution is required to develop a reliable and robust agent against changes in the dynamics. Several algorithms have been proposed to tackle this problem, e.g., learning contextual information to capture local dynamics [20], fine-tuning model parameters for fast adaptation [30]. These algorithms, however, are limited in that they do not explicitly learn dynamics models that can approximate the multi-modal distribution of transition dynamics. Contribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning (T-MCL), that can approximate the multi-modal distribution of transition dynamics in an unsupervised manner. To this end, we introduce a novel loss function, trajectory-wise oracle loss, for learning a multi-headed dynamics model where each prediction head specializes in different environments (see Figure 2a). By updating the most accurate prediction head over a trajectory segment (see Figure 2b), we discover that specialized prediction heads emerge automatically. Namely, our method can effectively cluster environments without any prior knowledge of environments. To further enable the model to perform online adaptation to unseen environments, we also incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector and provides it as an additional input to prediction heads (see Figure 2a). Finally, to utilize the specialized prediction heads more effectively, we propose adaptive planning that selects actions using the most accurate prediction head over a recent experience, which can be interpreted as finding the nearest cluster to the current environment (see Figure 2c). We demonstrate the effectiveness of T-MCL on various control tasks from OpenAI Gym [3]. For evaluation, we measure the generalization performance of model-based RL agents on unseen (yet related) environments with different transition dynamics. In our experiments, T-MCL exhibits superior generalization performance compared to existing model-based RL methods [4, 20, 30]. For example, compared to CaDM [20], a state-of-the-art model-based RL method for dynamics generalization, our method obtains 3.5x higher average return on the CrippledAnt environment. 2 Related work Model-based reinforcement learning. By learning a forward dynamics model that approximates the transition dynamics of environments, model-based RL attains a superior sample-efficiency. Such a learned dynamics model can be used as a simulator for model-free RL methods [16, 18, 40], providing a prior or additional features to a policy [9, 47], or planning ahead to select actions by predicting the future consequences of actions [1, 22, 42]. A major challenge in model-based RL is to learn accurate dynamics models that can provide correct future predictions. To this end, numerous methods thus have been proposed, including ensembles [4] and latent dynamics models [14, 15, 37]. While these methods have made significant progress even in complex domains [15, 37], dynamics models still struggle to provide accurate predictions on unseen environments [20, 30]. 
Multiple choice learning. Multiple choice learning (MCL) [12, 13] is an ensemble method where the objective is to minimize an oracle loss, making at least one ensemble member predict the correct answer. By making the most accurate model optimize the loss, MCL encourages the model to produce multiple outputs of high quality. Even though several optimization algorithms [8, 12] have been proposed for MCL objectives, it is challenging to employ these for learning deep neural networks due to training complexity. To address this problem, Lee et al. [21] proposed a stochastic gradient descent based algorithm. Recently, several methods [19, 29, 43] have been proposed to further improve MCL by tackling the overconfidence problem of neural networks. While most prior works on MCL have focused on supervised learning, our method applies the MCL algorithm to model-based RL.
Dynamics generalization and adaptation in model-based RL. Prior dynamics generalization methods have aimed to either encode inductive biases into the architecture [36] or to learn contextual information that captures the local dynamics [20]. Notably, Lee et al. [20] introduced a context encoder that captures dynamics-specific information of environments, and improved the generalization ability by providing a context latent vector as additional inputs. Our method further improves this method by combining multiple choice learning and context learning. For dynamics adaptation, several meta-learning based methods have been studied [30, 31, 35]. Recently, Nagabandi et al. [30] proposed a model-based meta-RL method that adapts to recent experiences either by updating model parameters via a small number of gradient updates [11] or by updating hidden representations of a recurrent model [10]. Our method differs from this method, in that we do not fine-tune the model parameters to adapt to new environments at evaluation time. 3 Problem setup We consider the standard RL framework where an agent optimizes a specified reward function through interacting with an environment. Formally, we formulate our problem as a discrete-time Markov decision process (MDP) [41], which is defined as a tuple (S,A, p, r, γ, ρ0). Here, S is the state space, A is the action space, p (s′|s, a) is the transition dynamics, r (s, a) is the reward function, ρ0 is the initial state distribution, and γ ∈ [0, 1) is the discount factor. The goal of RL is to obtain a policy, mapping from states to actions, that maximizes the expected return defined as the total accumulated reward. We tackle this problem in the context of model-based RL by learning a forward dynamics model f , which approximates the transition dynamics p (s′|s, a). Then, dynamics model f is used to provide training data for a policy or predict the future consequences of actions for planning. In order to address the problem of generalization, we further consider the distribution of MDPs, where the transition dynamics pc (s′|s, a) varies according to a context c. For instance, a robot agent’s transition dynamics may change when some of its parts malfunction due to unexpected damages. Our goal is to learn a generalizable forward dynamics model that is robust to such dynamics changes, i.e., approximating the multi-modal distribution of transition dynamics p (s′|s, a) = ∫ c p (c) pc (s ′|s, a). Specifically, given a set of training environments with contexts sampled from ptrain(c), we aim to learn a forward dynamics model that can produce accurate predictions for both training environments and test environments with unseen (but related) contexts sampled from ptest(c). Algorithm 1 Trajectory-wise MCL (T-MCL). Initialize parameters of backbone network θ, prediction heads {θheadh }Hh=1, context encoder φ. Initialize dataset B ← ∅. for each iteration do Sample c ∼ pseen (c). // ENVIRONMENT INTERACTION for t = 1 to TaskHorizon do Get context latent vector zt = g ( τ Pt,K ;φ ) and select the best prediction head h∗ from (3) Collect {(st, at, st+1, rt, τ Pt,K)} from environment with transition dynamics pc using h∗. Update B ← B ∪ {(st, at, st+1, rt, τ Pt,K)}. 
end for
Initialize L_tot ← 0 // DYNAMICS AND CONTEXT LEARNING
Sample {τ^P_{t_j,K}, τ^F_{t_j,M}}_{j=1}^{B} ∼ B
for j = 1 to B do
  for h = 1 to H do
    Compute the loss of the h-th prediction head: L^h_j ← −(1/M) Σ_{i=t_j}^{t_j+M−1} log f( s_{i+1} | b(s_i, a_i; θ), g(τ^P_{i,K}; φ); θ^head_h )
  end for
  Find h* = argmin_{h∈[H]} L^h_j and update L_tot ← L_tot + L^{h*}_j
end for
Update θ, φ, {θ^head_h}_{h=1}^H using the gradient ∇_{θ, φ, {θ^head_h}} L_tot
end for

4 Trajectory-wise multiple choice learning

In this section, we propose trajectory-wise multiple choice learning (T-MCL), which learns a multi-headed dynamics model for dynamics generalization. We first present a trajectory-wise oracle loss for making each prediction head specialize in different environments, and then introduce a context-conditional prediction head to further improve generalization. Finally, we propose an adaptive planning method that generates actions by planning under the most accurate prediction head over a recent experience.

4.1 Trajectory-wise oracle loss for multi-headed dynamics model

To approximate the multi-modal distribution of transition dynamics, we introduce a multi-headed dynamics model {f(s_{t+1} | b(s_t, a_t; θ); θ^head_h)}_{h=1}^{H} that consists of a backbone network b parameterized by θ and H prediction heads parameterized by {θ^head_h}_{h=1}^{H} (see Figure 2a). To make each prediction head specialize in different environments, we propose a trajectory-wise oracle loss defined as follows:

L^{T-MCL} = E_{τ^F_{t,M} ∼ B} [ min_{h∈[H]} −(1/M) Σ_{i=t}^{t+M−1} log f( s_{i+1} | b(s_i, a_i; θ); θ^head_h ) ],   (1)

where [H] is the set {1, · · · , H}, τ^F_{t,M} = (s_t, a_t, · · · , s_{t+M−1}, a_{t+M−1}, s_{t+M}) denotes a trajectory segment of size M, and B = {τ^F_{t,M}} is the training dataset. The proposed loss is designed to only update the most accurate prediction head over each trajectory segment for specialization (see Figure 2b). By considering the accumulated prediction error over trajectory segments, the proposed oracle loss can assign trajectories from different transition dynamics to different prediction heads more distinctively (see Figure 4 for supporting experimental results). Namely, our method clusters environments in an unsupervised manner. We also remark that the shared backbone network learns common features across all environments, which provides several advantages, such as improving sample-efficiency and reducing computational costs.

4.2 Context-conditional multi-headed dynamics model

To further enable the dynamics model to perform online adaptation to unseen environments, we introduce a context encoder g parameterized by φ, which produces a latent vector g(τ^P_{t,K}; φ) given K past transitions (s_{t−K}, a_{t−K}, · · · , s_{t−1}, a_{t−1}). This context encoder operates under the assumption that the true context of the underlying MDP can be captured from recent experiences [20, 30, 34, 48]. Using this context encoder, we propose to learn a context-conditional multi-headed dynamics model optimized by minimizing the following oracle loss:

L^{T-MCL}_{context} = E_{(τ^F_{t,M}, τ^P_{t,K}) ∼ B} [ min_{h∈[H]} −(1/M) Σ_{i=t}^{t+M−1} log f( s_{i+1} | b(s_i, a_i; θ), g(τ^P_{i,K}; φ); θ^head_h ) ].   (2)

We remark that the dynamics generalization of T-MCL can be enhanced by incorporating the contextual information into the dynamics model for enabling its online adaptation. To extract more meaningful contextual information, we also utilize various auxiliary prediction losses proposed in Lee et al. [20] (see the supplementary material for more details).

4.3 Adaptive planning

Once a multi-headed dynamics model is learned, it can be used for selecting actions by planning.
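As an illustration of the oracle loss in (2) above, the following PyTorch-style sketch accumulates each head's prediction error over a whole trajectory segment and backpropagates only through the best head per segment; the backbone and context encoder receive gradients through that head. The interfaces (`backbone`, `context_encoder`, deterministic heads with a fixed-variance Gaussian likelihood) are assumptions for illustration, not the authors' released implementation.

```python
import torch

def t_mcl_loss(backbone, heads, context_encoder, segments, past_contexts):
    """Trajectory-wise oracle loss, cf. Eq. (2).

    segments:      list of B segments, each a list of M (s_i, a_i, s_next) tensors.
    past_contexts: for each segment, a list of M past-transition windows tau^P_{i,K}.
    heads:         list of H modules mapping (features, z) -> predicted next state.
    """
    total = 0.0
    for segment, past in zip(segments, past_contexts):
        per_head = []
        for head in heads:
            nll = 0.0
            for (s, a, s_next), tau_p in zip(segment, past):
                z = context_encoder(tau_p)                # g(tau^P_{i,K}; phi)
                pred = head(backbone(s, a), z)            # f(. | b(s, a; theta), z; theta_h)
                nll = nll + ((pred - s_next) ** 2).sum()  # Gaussian NLL up to constants (unit variance assumed)
            per_head.append(nll / len(segment))
        total = total + torch.stack(per_head).min()       # only the best head gets gradient
    return total / len(segments)
```

A batch update then takes one gradient step on this loss, matching the inner loop of Algorithm 1.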
Since the performance of planning depends on the quality of predictions, it is important to select the prediction head specialized in the current environment for planning. To this end, following the idea of Narendra & Balakrishnan [32], we propose an adaptive planning method that selects the most accurate prediction head over a recent experience (see Figure 2c). Formally, given N past transitions, we select the prediction head h* as follows:

h* = argmin_{h ∈ [H]} Σ_{i=t−N}^{t−2} ℓ( s_{i+1}, f( b(s_i, a_i), g(τ^P_{i,K}; φ); θ^head_h ) ),   (3)

where ℓ is the mean squared error function. One can note that this planning method corresponds to finding the nearest cluster to the current environment. We empirically show that this adaptive planning significantly improves the performance by selecting the prediction head specialized in a specific environment (see Figure 6b for supporting experimental results).

5 Experiments

In this section, we designed our experiments to answer the following questions:
• How does our method compare to existing model-based RL methods and the state-of-the-art model-free meta-RL method (see Figure 3)?
• Can prediction heads be specialized for a certain subset of training environments with similar dynamics (see Figure 4 and Figure 5)?
• Is the multi-headed architecture useful for dynamics generalization of other model-based RL methods (see Figure 6a)?
• Does adaptive planning improve generalization performance (see Figure 6b)?
• Can T-MCL extract meaningful contextual information from complex environments (see Figure 6c and Figure 6d)?

5.1 Setups

Environments. We demonstrate the effectiveness of our proposed method on classic control problems (i.e., CartPoleSwingUp and Pendulum) from OpenAI Gym [3] and simulated robotic continuous tasks (i.e., Hopper, SlimHumanoid, HalfCheetah, and CrippledAnt) from the MuJoCo physics engine [44]. To evaluate the generalization performance, we designed environments to follow a multi-modal distribution by changing the environment parameters (e.g., length and mass), similar to Packer et al. [33] and Zhou et al. [48]. We use two predefined discrete sets of environment parameters for training and test environments, where the parameters for test environments are outside the range of the training parameters. Then, we learn a dynamics model on environments whose transition dynamics are characterized by the environment parameters randomly sampled before the episode starts. For evaluation, we report the performance of a trained dynamics model on unseen environments whose environment parameters are randomly sampled from the test parameter set. Similar to prior works [4, 30, 46], we assume that the reward function of environments is known, i.e., ground-truth rewards at the predicted states are available for planning. For all our experiments, we report the mean and standard deviation across three runs. We provide more details in the supplementary material.

Planning. We use model predictive control (MPC) [27] to select actions based on the learned dynamics model. Specifically, we use the cross entropy method (CEM) [6] to optimize action sequences by iteratively re-sampling action sequences near the best performing action sequences from the last iteration.

Implementation details of T-MCL. For all experiments, we use an ensemble of multi-headed dynamics models that are independently optimized with the trajectory-wise oracle loss. We reduce modeling errors by training multiple dynamics models [4].
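The MPC procedure described in the Planning paragraph above (CEM over action sequences, executing only the first action) can be sketched as follows. The `rollout_return` callable, which unrolls the currently selected prediction head and sums the known rewards, and the CEM hyperparameters are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def cem_plan(rollout_return, state, horizon, act_dim,
             iters=5, pop=200, elite=20, rng=np.random.default_rng()):
    """Cross-entropy method for MPC over action sequences.

    rollout_return(state, actions): predicted return of an action sequence of
    shape (horizon, act_dim) under the selected prediction head.
    """
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, horizon, act_dim))
        returns = np.array([rollout_return(state, a) for a in samples])
        elites = samples[np.argsort(returns)[-elite:]]               # best performing sequences
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6   # refit sampling distribution
    return mean[0]                                                   # MPC executes only the first action
```

After executing the first action in the environment, the loop re-plans from the new state, so the dynamics model is queried repeatedly inside every environment step.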
To construct a trajectory segment τ^F_{t,M} in (1), we use M transitions randomly sampled from a trajectory instead of consecutive transitions (s_t, a_t, · · · , s_{t+M}). We empirically found that this stabilizes the training by breaking the temporal correlations of the training data. We also remark that the same hyperparameters are used for all experiments except Pendulum, which has a short task horizon. We provide more details in the supplementary material.

Baselines. To evaluate the performance of our method, we consider the following model-based and model-free RL methods:
• Probabilistic ensemble dynamics model (PETS) [4]: an ensemble of probabilistic dynamics models that captures the uncertainty in modeling and planning. PETS employs ensembles of single-headed dynamics models optimized to cover all training environments, while T-MCL employs ensembles of multi-headed dynamics models specialized to a subset of environments.

[Figure: assignment of trajectories to Head 1, Head 2, Head 3 across environment parameters, for (a) CartPoleSwingUp with masses 0.25, 0.50, 1.50, 2.50, (b) Pendulum with lengths 0.50, 0.75, 1.0, 1.25, and (c) HalfCheetah with masses 0.25, 0.50, 1.50, 2.50.]
[Figure panels: (a) Multiple choice learning (MCL), (b) Trajectory-wise MCL (T-MCL), (c) Generalization performance.]

5.2 Comparative evaluation on control tasks

Figure 3 shows the generalization performances of our method and baseline methods on unseen environments (see the supplementary material for training curve plots). Our method significantly outperforms all model-based RL baselines in all environments. In particular, T-MCL achieves an average return of 19280.1 on HalfCheetah environments while that of PETS is 2223.9. This result demonstrates that our method is more effective for dynamics generalization, compared to the independent ensemble of dynamics models. On the other hand, model-based meta-RL methods (ReBAL and GrBAL) do not exhibit significant performance gains over PETS, which shows the difficulty of adapting a dynamics model to unseen environments via meta-learning. We also remark that CaDM does not consistently improve over PETS, due to the difficulty in context learning with a discrete number of training environments. We observed that T-MCL sometimes reaches the performance of PEARL or even outperforms it in terms of both sample-efficiency and asymptotic performance. This result demonstrates the effectiveness of our method for dynamics generalization, especially given that PEARL adapts to test environments by collecting trajectories at evaluation time.

[Figure 6 plots (average return vs. timesteps): (a) Multi-head architecture (PETS vs. Multi-Headed PETS), (b) Adaptive planning (T-MCL vs. T-MCL (Non-adaptive)), (c) Context learning (PETS, T-MCL, T-MCL (No Context)), (d) t-SNE visualization (Mass = 0.25, 0.50, 1.50, 2.50).]
Figure 6: (a) Generalization performance of PETS and Multi-Headed PETS on unseen HalfCheetah environments. (b) We compare the generalization performance of adaptive planning to non-adaptive planning on unseen HalfCheetah environments. (c) Generalization performance of trained dynamics models on unseen HalfCheetah environments.
One can observe that T-MCL still outperforms PETS without context learning, but this results in a significant performance drop. (d) t-SNE visualization of hidden features of context-conditional multi-headed dynamics model on HalfCheetah environments. 5.3 Analysis Specialization. To investigate the ability of our method to learn specialized prediction heads, we visualize how training trajectories are assigned to each head in Figure 4. One can observe that trajectories are distinctively assigned to prediction heads, while trajectories from environments with similar transition dynamics are assigned to the same prediction head. For example, we discover that the transition dynamics of Pendulum with length 1.0 and 1.25 are more similar to each other than Pendulum with other lengths (see the supplementary material for supporting figures), which implies that our method can cluster environments in an unsupervised manner. Effects of trajectory-wise loss. To further investigate the effectiveness of trajectory-wise oracle loss, we compare our method to MCL, where we consider only a single transition for selecting the model to optimize, i.e., M = 1 in (1). Figure 5a and Figure 5b show that training trajectories are more distinctively assigned to each head when we use T-MCL, which implies that trajectory-wise loss is indeed important for learning specialized prediction heads. Also, as shown in Figure 5c, this leads to superior generalization performance over the dynamics model trained with MCL, showing that learning specialized prediction heads improves the generalization performance. Effects of multi-headed dynamics model. We also analyze the isolated effect of employing multiheaded architecture on the generalization performance. To this end, we train the multi-headed version of PETS, i.e., ensemble of multi-headed dynamics models without trajectory-wise oracle loss, context learning, and adaptive planning. Figure 6a shows that multi-headed PETS does not improve the performance of vanilla PETS on HalfCheetah environments, which demonstrates the importance of training with trajectory-wise oracle loss and adaptively selecting the most accurate prediction head for achieving superior generalization performance of our method. Effects of adaptive planning. We investigate the importance of selecting the specialized prediction head adaptively. Specifically, we compare the performance of employing the proposed adaptive planning method to the performance of employing non-adaptive planning, i.e., planning with the average predictions of prediction heads. As shown in Figure 6b, the gain due to adaptive planning is significant, which confirms that proposed adaptive planning is important. Effects of context learning. We examine our choice of integrating context learning by comparing the performance of a context-conditional multi-headed dynamics model to the performance of a multi-headed dynamics model. As shown in Figure 6c, removing context learning scheme from the T-MCL results in steep performance degradation, which demonstrates the importance of incorporating contextual information. However, we remark that the T-MCL still outperforms PETS without context learning scheme. Also, we visualize the hidden features of a context-conditional multi-headed dynamics model on HalfCheetah environments using t-SNE [26] in Figure 6d. 
One can observe that features from the environments with different transition dynamics are separated in the embedding space, which implies that our method indeed learns meaningful contextual information. Effects of hyperparameters. Finally, we investigate how hyperparameters affect the performance of T-MCL. Specifically, we consider three hyperparameters, i.e., H ∈ {2, 3, 4, 5, 8} for the number of prediction heads in (2), M ∈ {1, 5, 10, 20, 30} for the horizon of trajectory-wise oracle loss in (2), and N ∈ {1, 5, 10, 20, 30} for the horizon of adaptive planning in (3). Figure 7a shows that H = 3 achieves the best performance because three prediction heads are enough to capture the multi-modality of the training environments in our setting. When H > 3, the performance decreases because trajectories from similar dynamics are split into multiple heads. Figure 7b and Figure 7c show that our method is robust to the horizons M,N , and considering more transitions can further improve the performance. We provide results for all environments in the supplementary material. 6 Conclusion In this work, we present trajectory-wise multiple choice learning, a new model-based RL algorithm that learns a multi-headed dynamics model for dynamics generalization. Our method consists of three key ingredients: (a) trajectory-wise oracle loss for multi-headed dynamics model, (b) contextconditional multi-headed dynamics model, and (c) adaptive planning. We show that our method can capture the multi-modal nature of environments in an unsupervised manner, and outperform existing model-based RL methods. Overall, we believe our approach would further strengthen the understanding of dynamics generalization and could be useful to other relevant topics such as model-based policy optimization methods [15, 16]. Broader Impact While deep reinforcement learning (RL) has been successful in a range of challenging domains, it still suffers from a lack of generalization ability to unexpected changes in surrounding environmental factors [20, 30]. This failure of autonomous agents to generalize across diverse environments is one of the major reasoning behind the objection to real-world deployment of RL agents. To tackle this problem, in this paper, we focus on developing more robust and generalizable RL algorithm, which could improve the applicability of deep RL to various real-world applications, such as robotics manipulation [17] and package delivery [2]. Such advances in the robustness of RL algorithm could contribute to improved productivity of society via the safe and efficient utilization of autonomous agents in a diverse range of industries. Unfortunately, however, we could also foresee the negative long-term consequences of deploying autonomous systems in the real-world. For example, autonomous agents could be abused by specifying harmful objectives such as autonomous weapons. While such malicious usage of autonomous agents was available long before the advent of RL algorithms, developing an RL algorithm for dynamics generalization may accelerate the real-world deployment of such malicious robots, e.g., autonomous drones loaded with explosives, by making them more robust to changing dynamics or defense systems. We would like to recommend the researchers to recognize this potential misuse as we further improve RL systems. Acknowledgments and Disclosure of Funding We thank Junsu Kim, Seunghyun Lee, Jongjin Park, Sihyun Yu, and our anonymous reviewers for feedback and discussions. 
This research is supported in part by ONR PECASE N000141612723, Tencent, Berkeley Deep Drive, Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), and Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF2018R1A5A1059921).
1. What is the focus and contribution of the paper regarding model-based reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its technical contributions and empirical evaluation? 3. Do you have any concerns or weaknesses regarding the approach's performance compared to model-free methods or the number of heads used? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper studies the problem of learning robust generalizable dynamics models for model-based reinforcement learning in environments with multi-modal dynamics. Authors present a novel model-based RL algorithm based on a multi-headed dynamics model that is a) optimized trajectory-wise, b) conditioned on prior context and c) whose heads are selected adaptively for optimal planning. The presented model outperforms state-of-the-art model-based baselines and results comparable to a model-free algorithm. Strengths - The problem and approach are well motivated and grounded in theory and prior work. - Three clear technical contributions are presented in the paper: a) a multi-headed dynamics model trained via a novel trajectory-wise oracle loss, b) a context encoder that conditions the multi-headed dynamics predictions on prior states and actions, and c) an adaptive planning method that selects the dynamics model head based on how well heads predicted in the current environment in the past. - The empirical evaluation is thorough, and well motivated and organized. - The approach outperforms state-of-the-art model-based baselines on classic control problems and simulated robotic continuous tasks. - Qualitative results show that all heads are indeed utilized. - Ablation studies show that each of the presented contributions helps with performance. - Authors took the time to think properly about the broader impact of their work. - The paper is well-written, well-structured and easily understandable. - This work is both significant in that it addressed changing dynamical environments and generalization across environments which is an outstanding problem for model-based methods and is to my knowledge novel to the NeurIPS community. Weaknesses - The approach does not seem to outperform the model-free PEARL baseline on every task at convergence. - The number of heads has been fixed to three and either a better explanation could have been given why three heads were chosen or a more detailed analysis would be necessary if three is the optimal number of heads or whether the number of heads needs to change depending on how strongly and how many physical parameters vary in the environment. - Are there any interesting qualitative results on the control tasks? It might be interesting to visualize what happens when the different heads take over control for the different tasks. E.g. how does it look like when the best head steers the ant with crippled legs vs how does it look like when the worst head steers it? (assuming it is possible to manually fix the head in your method of course)
NIPS
Title Linear Contextual Bandits with Knapsacks Abstract We consider the linear contextual bandit problem with resource consumption, in addition to reward generation. In each round, the outcome of pulling an arm is a reward as well as a vector of resource consumptions. The expected values of these outcomes depend linearly on the context of that arm. The budget/capacity constraints require that the total consumption doesn’t exceed the budget for each resource. The objective is once again to maximize the total reward. This problem turns out to be a common generalization of classic linear contextual bandits (linContextual) [8, 11, 1], bandits with knapsacks (BwK) [3, 9], and the online stochastic packing problem (OSPP) [4, 14]. We present algorithms with near-optimal regret bounds for this problem. Our bounds compare favorably to results on the unstructured version of the problem [5, 10] where the relation between the contexts and the outcomes could be arbitrary, but the algorithm only competes against a fixed set of policies accessible through an optimization oracle. We combine techniques from the work on linContextual, BwK and OSPP in a nontrivial manner while also tackling new difficulties that are not present in any of these special cases. 1 Introduction In the contextual bandit problem [8, 2], the decision maker observes a sequence of contexts (or features). In every round she needs to pull one out of K arms, after observing the context for that round. The outcome of pulling an arm may be used along with the contexts to decide future arms. Contextual bandit problems have found many useful applications such as online recommendation systems, online advertising, and clinical trials, where the decision in every round needs to be customized to the features of the user being served. The linear contextual bandit problem [1, 8, 11] is a special case of the contextual bandit problem, where the outcome is linear in the feature vector encoding the context. As pointed by [2], contextual bandit problems represent a natural half-way point between supervised learning and reinforcement learning: the use of features to encode contexts and the models for the relation between these feature vectors and the outcome are often inherited from supervised learning, while managing the exploration-exploitation tradeoff is necessary to ensure good performance in reinforcement learning. The linear contextual bandit problem can thus be thought of as a midway between the linear regression model of supervised learning, and reinforcement learning. Recently, there has been a significant interest in introducing multiple “global constraints” in the standard bandit setting [9, 3, 10, 5]. Such constraints are crucial for many important real-world applications. For example, in clinical trials, the treatment plans may be constrained by the total availability of medical facilities, drugs and other resources. In online advertising, there are budget constraints that restrict the number of times an ad is shown. Other applications include dynamic pricing, dynamic procurement, crowdsourcing, etc.; see [9, 3] for many such examples. In this paper, we consider the linear contextual bandit with knapsacks (henceforth, linCBwK) problem. In this problem, the context vectors are generated i.i.d. in every round from some unknown distribution, and on picking an arm, a reward and a consumption vector is observed, which depend ∗Columbia University. [email protected]. †Microsoft Research. [email protected]. 
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. linearly on the context vector. The aim of the decision maker is to maximize the total reward while ensuring that the total consumption of every resource remains within a given budget. Below, we give a more precise definition of this problem. We use the following notational convention throughout: vectors are denoted by bold face lower case letters, while matrices are denoted by regular face upper case letters. Other quantities such as sets, scalars, etc. may be of either case, but never bold faced. All vectors are column vectors, i.e., a vector in n dimensions is treated as an n× 1 matrix. The transpose of matrix A is A⊤. Definition 1 (linCBwK). There are K “arms”, which we identify with the set [K]. The algorithm is initially given as input a budget B ∈ R+. In every round t, the algorithm first observes context xt(a) ∈ [0, 1]m for every arm a, and then chooses an arm at ∈ [K], and finally observes a reward rt(at) ∈ [0, 1] and a d-dimensional consumption vector vt(at) ∈ [0, 1]d. The algorithm has a “no-op” option, which is to pick none of the arms and get 0 reward and 0 consumption. The goal of the algorithm is to pick arms such that the total reward ∑T t=1 rt(at) is maximized, while ensuring that the total consumption does not exceed the budget, i.e., ∑ t vt(at) ≤ B1. We make the following stochastic assumption for context, reward, and consumption vectors. In every round t, the tuple {xt(a), rt(a),vt(a)}Ka=1 is generated from an unknown distribution D, independent of everything in previous rounds. Also, there exists an unknown vector µ∗ ∈ [0, 1]m and a matrix W∗ ∈ [0, 1]m×d such that for every arm a, given contexts xt(a), and history Ht−1 before time t, E[rt(a)|xt(a), Ht−1] = µ⊤∗ xt(a), E[vt(a)|xt(a), Ht−1] = W⊤∗ xt(a). (1) For succinctness, we will denote the tuple of contexts for K arms at time t as matrix Xt ∈ [0, 1]m×K , with xt(a) being the a th column of this matrix. Similarly, rewards and consumption vectors at time t are represented as the vector rt ∈ [0, 1]K and the matrix Vt ∈ [0, 1]d×K respectively. As we discuss later in the text, the assumption in equation (1) forms the primary distinction between our linear contextual bandit setting and the general contextual bandit setting considered in [5]. Exploiting this linearity assumption will allow us to generate regret bounds which do not depend on the number of arms K, rendering it to be especially useful when the number of arms is large. Some examples of this include recommendation systems with large number of products (e.g., retail products, travel packages, ad creatives, sponsored facebook posts). Another advantage over using the general contextual bandit setting of [5] is that we don’t need an oracle access to a certain optimization problem, which in this case is required to solve an NP-Hard problem. (See Section 1.1 for a more detailed discussion.) We compare the performance of an algorithm to that of an optimal adaptive policy that knows the distribution D and the parameters (µ∗,W∗), and can take into account the history up to that point, as well as the current context, to decide (possibly with randomization) which arm to pull at time t. However, it is easier to work with an upper bound on this, which is the optimal expected reward of a static policy that is required to satisfy the constraints only in expectation. This technique has been used in several related problems and is standard by now [14, 9]. Definition 2 (Optimal Static Policy). 
A context-dependent non-adaptive policy π is a mapping from context space [0, 1]m×K to Ω = {p ∈ [0, 1]K : ‖p‖1 ≤ 1}, where π(X)i denotes the probability of playing arm i when the context is X , and 1−∑Ki=1 π(X)i is the probability of no-op. Define r(π) and v(π) to be the expected reward and consumption vector of policy π, respectively, i.e. r(π) := E(X,r,V )∼D[rπ(X)] = EX∼D[µ ⊤ ∗ Xπ(X)]. (2) v(π) := E(X,r,V )∼D[V π(X)] = EX∼D[W ⊤ ∗ Xπ(X)]. (3) Let π∗ := argmaxπ T r(π) such that T v(π) ≤ B1 (4) be the optimal static policy. Note that since no-op is allowed, a feasible policy always exists. We denote the value of this optimal static policy by OPT := T r(π∗). The following lemma proves that OPT upper bounds the value of an optimal adaptive policy. Proof is in Appendix B in the supplement. Lemma 1. Let OPT denote the value of an optimal adaptive policy that knows the distribution D and parameters µ∗,W∗. Then OPT ≥ OPT. Definition 3 (Regret). Let at be the arm played at time t by the algorithm. Then, regret is defined as regret(T ) := OPT − T ∑ t=1 rt(at). 1.1 Main results Our main result is an algorithm with near-optimal regret bound for linCBwK. Theorem 1. There is an algorithm for linCBwK such that if B > m1/2T 3/4, then with probability at least 1− δ, regret(T ) = O ( (OPTB + 1)m √ T log(dT/δ) log(T ) ) . Relation to general contextual bandits. There have been recent papers [5, 10] that solve problems similar to linCBwK but for general contextual bandits. In these papers the relation between contexts and outcome vectors is arbitrary and the algorithms compete with an arbitrary fixed set of context dependent policies Π accessible via an optimization oracle, with regret bounds being O ( (OPTB + 1) √ KT log(dT |Π|/δ) ) . These approaches could potentially be applied to the linear setting using a set Π of linear context dependent policies. Comparing their bounds with ours, in our results, essentially a √ K log(|Π|) factor is replaced by a factor of m. Most importantly, we have no dependence on K,3 which enables us to consider problems with large action spaces. Further, suppose that we want to use their result with the set of linear policies, i.e., policies of the form, for some fixed θ ∈ ℜm, arg max a∈[K] {xt(a)⊤θ}. Then, their algorithms would require access to an “Arg-Max Oracle” that can find the best such policy (maximizing total reward) for a given set of contexts and rewards (no resource consumption). In fact, by a reduction from the problem of learning halfspaces with noise [16], we can show that the optimization problem underlying such an “Arg-Max Oracle” problem is NP-Hard, making such an approach computationally expensive. The proof of this is in Appendix C in the supplement. The only downside to our results is that we need the budget B to be Ω(m1/2T 3/4). Getting similar bounds for budgets as small as B = Θ(m √ T ) is an interesting open problem. (This also indicates that this is indeed a harder problem than all the special cases.) Near-optimality of regret bounds. In [12], it was shown that for the linear contextual bandits problem, no online algorithm can achieve a regret bound better than Ω(m √ T ). In fact, they prove this lower bound for linear contextual bandits with static contexts. Since that problem is a special case of the linCBwK problem with d = 1, this shows that the dependence on m and T in the above regret bound is optimal upto log factors. For general contextual bandits with resource constraints, the bounds of [5, 10] are near optimal. 
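To make the benchmark in Definition 2 concrete, the sketch below estimates OPT by solving a sample-average version of the linear program (4): it draws n context matrices, treats the per-context action distribution as a decision variable, and enforces the budget constraint on average over the sample. It assumes μ* and W* are known, uses cvxpy as the LP solver, and the function name and the sample-average relaxation are my additions rather than anything specified in the paper.

```python
import numpy as np
import cvxpy as cp

def estimate_opt(contexts, mu, W, B, T):
    """Sample-average estimate of OPT = T * r(pi*) from Eq. (4).

    contexts: array of shape (n, m, K), n sampled context matrices X.
    mu:       (m,) reward parameter.   W: (m, d) consumption parameter.
    """
    n, m, K = contexts.shape
    d = W.shape[1]
    R = np.einsum('m,nmk->nk', mu, contexts)        # expected rewards  mu^T X
    C = np.einsum('md,nmk->ndk', W, contexts)       # expected consumption  W^T X
    p = cp.Variable((n, K), nonneg=True)            # one action distribution per sampled context
    constraints = [cp.sum(p, axis=1) <= 1]          # remaining mass is the "no-op"
    for j in range(d):                              # per-round budget B/T for each resource
        constraints.append(cp.sum(cp.multiply(C[:, j, :], p)) / n <= B / T)
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(R, p)) / n), constraints)
    problem.solve()
    return T * problem.value
```

The optimal p recovered per sampled context plays the role of π*(X); with enough samples the objective value approaches r(π*) and the returned quantity approaches OPT.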
1.1 Main results

Our main result is an algorithm with a near-optimal regret bound for linCBwK.

Theorem 1. There is an algorithm for linCBwK such that if $B > m^{1/2}T^{3/4}$, then with probability at least $1-\delta$,
$$\mathrm{regret}(T) = O\Big(\big(\tfrac{\mathrm{OPT}}{B} + 1\big)\, m\sqrt{T \log(dT/\delta)}\, \log(T)\Big).$$

Relation to general contextual bandits. There have been recent papers [5, 10] that solve problems similar to linCBwK but for general contextual bandits. In these papers the relation between contexts and outcome vectors is arbitrary, and the algorithms compete with an arbitrary fixed set of context-dependent policies $\Pi$ accessible via an optimization oracle, with regret bounds being $O\big((\tfrac{\mathrm{OPT}}{B}+1)\sqrt{KT \log(dT|\Pi|/\delta)}\big)$. These approaches could potentially be applied to the linear setting using a set $\Pi$ of linear context-dependent policies. Comparing their bounds with ours, in our results essentially a $\sqrt{K\log(|\Pi|)}$ factor is replaced by a factor of $m$. Most importantly, we have no dependence on $K$ (similar to the regret bounds for linear contextual bandits [8, 1, 11]), which enables us to consider problems with large action spaces. Further, suppose that we want to use their result with the set of linear policies, i.e., policies of the form $\arg\max_{a\in[K]} \{\mathbf{x}_t(a)^\top \boldsymbol{\theta}\}$ for some fixed $\boldsymbol{\theta} \in \mathbb{R}^m$. Then, their algorithms would require access to an "Arg-Max Oracle" that can find the best such policy (maximizing total reward) for a given set of contexts and rewards (no resource consumption). In fact, by a reduction from the problem of learning halfspaces with noise [16], we can show that the optimization problem underlying such an "Arg-Max Oracle" is NP-Hard, making such an approach computationally expensive. The proof of this is in Appendix C in the supplement. The only downside to our results is that we need the budget $B$ to be $\Omega(m^{1/2}T^{3/4})$. Getting similar bounds for budgets as small as $B = \Theta(m\sqrt{T})$ is an interesting open problem. (This also indicates that this is indeed a harder problem than all the special cases.)

Near-optimality of regret bounds. In [12], it was shown that for the linear contextual bandits problem, no online algorithm can achieve a regret bound better than $\Omega(m\sqrt{T})$. In fact, they prove this lower bound for linear contextual bandits with static contexts. Since that problem is a special case of the linCBwK problem with $d = 1$, this shows that the dependence on $m$ and $T$ in the above regret bound is optimal up to log factors. For general contextual bandits with resource constraints, the bounds of [5, 10] are near optimal.

Relation to BwK [3] and OSPP [4]. It is easy to see that the linCBwK problem is a generalization of the linear contextual bandits problem [1, 8, 11]: there, the outcome is a scalar and the goal is simply to maximize the sum of these outcomes. Remarkably, the linCBwK problem also turns out to be a common generalization of the bandits with knapsacks (BwK) problem considered in [9, 3], and the online stochastic packing problem (OSPP) studied by [13, 6, 15, 14, 4]. In both BwK and OSPP, the outcome of every round $t$ is a reward $r_t$ and a vector $\mathbf{v}_t$, and the goal of the algorithm is to maximize $\sum_{t=1}^{T} r_t$ while ensuring that $\sum_{t=1}^{T} \mathbf{v}_t \le B\mathbf{1}$. The problems differ in how these rewards and vectors are picked. In the OSPP problem, in every round $t$, the algorithm may pick any (reward, vector) pair from a given set $A_t$ of $(d+1)$-dimensional vectors. The set $A_t$ is drawn i.i.d. from an unknown distribution over sets of vectors. This corresponds to the special case of linCBwK where $m = d+1$ and the context $\mathbf{x}_t(a)$ itself is equal to $(r_t(a), \mathbf{v}_t(a))$. In the BwK problem, there is a fixed set of arms, and for each arm there is an unknown distribution over (reward, vector) pairs. The algorithm picks an arm, and a (reward, vector) pair is drawn from the corresponding distribution for that arm. This corresponds to the special case of linCBwK where $m = K$ and the context $X_t = I$, the identity matrix, for all $t$.

We use techniques from all three special cases: our algorithms follow the primal-dual paradigm and use an online learning algorithm to search the dual space, as was done in [3]. In order to deal with linear contexts, we use techniques from [1, 8, 11] to estimate the weight matrix $W_*$ and to define "optimistic estimates" of $W_*$. We also use the technique of combining the objective and the constraints using a certain tradeoff parameter, which was introduced in [4]. Further new difficulties arise, such as estimating the optimum value from the first few rounds, a task that follows from standard techniques in each of the special cases but is very challenging here. We develop a new way of exploration that uses the linear structure, so that one can evaluate all possible choices that could have led to an optimum solution on the historic sample. This technique might be of independent interest in estimating optimum values. One can see that the problem is indeed more than the sum of its parts from the fact that we get the optimal bound for linCBwK only when $B \ge \tilde\Omega(m^{1/2}T^{3/4})$, unlike either special case, for which the optimal bound holds for all $B$ (but is meaningful only for $B = \tilde\Omega(m\sqrt{T})$). The approach in [3] (for BwK) extends to the case of "static" contexts, where each arm has a context that doesn't change over time (it was, however, incorrectly claimed in [3] that the approach extends to dynamic contexts without much modification). The OSPP of [4] is not a special case of linCBwK with static contexts; this is one indication of the additional difficulty of dynamic over static contexts.

Other related work. Recently, [17] showed an $O(\sqrt{T})$ regret in the linear contextual setting with a single budget constraint, when costs depend only on contexts and not on arms. Due to space constraints, we have moved many proofs from the main part of the paper to the supplement.

2 Preliminaries

2.1 Confidence Ellipsoid

Consider a stochastic process which in each round $t$ generates a pair of observations $(r_t, \mathbf{y}_t)$, such that $r_t$ is an unknown linear function of $\mathbf{y}_t$ plus some zero-mean bounded noise, i.e., $r_t = \boldsymbol{\mu}_*^\top \mathbf{y}_t + \eta_t$, where $\mathbf{y}_t, \boldsymbol{\mu}_* \in \mathbb{R}^m$, $|\eta_t| \le 2R$, and $\mathbb{E}[\eta_t \mid \mathbf{y}_1, r_1, \ldots, \mathbf{y}_{t-1}, r_{t-1}, \mathbf{y}_t] = 0$.
At any time $t$, a high-confidence estimate of the unknown vector $\boldsymbol{\mu}_*$ can be obtained by building a "confidence ellipsoid" around the $\ell_2$-regularized least-squares estimate $\hat{\boldsymbol{\mu}}_t$ constructed from the observations made so far. This technique is common in prior work on linear contextual bandits (e.g., in [8, 11, 1]). For any regularization parameter $\lambda > 0$, let
$$M_t := \lambda I + \sum_{i=1}^{t-1} \mathbf{y}_i \mathbf{y}_i^\top, \qquad \hat{\boldsymbol{\mu}}_t := M_t^{-1} \sum_{i=1}^{t-1} \mathbf{y}_i r_i.$$
The following result from [1] shows that $\boldsymbol{\mu}_*$ lies with high probability in an ellipsoid with center $\hat{\boldsymbol{\mu}}_t$. For any positive semi-definite (PSD) matrix $M$, define the $M$-norm as $\|\boldsymbol{\mu}\|_M := \sqrt{\boldsymbol{\mu}^\top M \boldsymbol{\mu}}$. The confidence ellipsoid at time $t$ is defined as
$$C_t := \Big\{ \boldsymbol{\mu} \in \mathbb{R}^m : \|\boldsymbol{\mu} - \hat{\boldsymbol{\mu}}_t\|_{M_t} \le R\sqrt{m \log\big((1+tm/\lambda)/\delta\big)} + \sqrt{\lambda m} \Big\}.$$

Lemma 2 (Theorem 2 of [1]). If $\|\boldsymbol{\mu}_*\|_2 \le \sqrt{m}$ and $\|\mathbf{y}_t\|_2 \le \sqrt{m}$ for all $t$, then with probability $1-\delta$, $\boldsymbol{\mu}_* \in C_t$.

Another useful observation about this construction is stated below. It first appeared as Lemma 11 of [8], and was also proved as Lemma 3 in [11].

Lemma 3 (Lemma 11 of [8]). $\sum_{t=1}^{T} \|\mathbf{y}_t\|_{M_t^{-1}} \le \sqrt{mT\log(T)}$.

As a corollary of the above two lemmas, we obtain a bound on the total error in the estimate provided by "any point" from the confidence ellipsoid. (The proof is in Appendix D in the supplement.)

Corollary 1. For $t = 1, \ldots, T$, let $\tilde{\boldsymbol{\mu}}_t \in C_t$ be a point in the confidence ellipsoid, with $\lambda = 1$ and $2R = 1$. Then, with probability $1-\delta$,
$$\sum_{t=1}^{T} \big|\tilde{\boldsymbol{\mu}}_t^\top \mathbf{y}_t - \boldsymbol{\mu}_*^\top \mathbf{y}_t\big| \le 2m\sqrt{T \log\big((1+Tm)/\delta\big)}\,\log(T).$$

2.2 Online Learning

Consider a $T$-round game played between an online learner and an adversary, where in round $t$ the learner chooses a $\boldsymbol{\theta}_t \in \Omega := \{\boldsymbol{\theta} : \|\boldsymbol{\theta}\|_1 \le 1,\ \boldsymbol{\theta} \ge 0\}$, and then observes a linear function $g_t : \Omega \to [-1,1]$ picked by the adversary. The learner's choice $\boldsymbol{\theta}_t$ may only depend on the learner's and adversary's choices in previous rounds. The goal of the learner is to minimize regret, defined as the difference between the learner's objective value and the value of the best single choice in hindsight:
$$\mathcal{R}(T) := \max_{\boldsymbol{\theta} \in \Omega} \sum_{t=1}^{T} g_t(\boldsymbol{\theta}) - \sum_{t=1}^{T} g_t(\boldsymbol{\theta}_t).$$
The multiplicative weight update (MWU) algorithm (generalization by [7]) is a fast and efficient online learning algorithm for this problem. Let $g_{t,j} := g_t(\mathbf{1}_j)$. Then, given a parameter $\epsilon > 0$, in round $t+1$ the choice of this algorithm takes the following form,
$$\theta_{t+1,j} = \frac{w_{t,j}}{1 + \sum_j w_{t,j}}, \quad \text{where} \quad w_{t,j} = \begin{cases} w_{t-1,j}(1+\epsilon)^{g_{t,j}} & \text{if } g_{t,j} > 0, \\ w_{t-1,j}(1-\epsilon)^{-g_{t,j}} & \text{if } g_{t,j} \le 0, \end{cases} \quad (5)$$
with initialization $w_{0,j} = 1$ for all $j$.

Lemma 4 ([7]). For any $0 < \epsilon \le \tfrac{1}{2}$, the MWU algorithm provides the following regret bound for the online learning problem described above: $\mathcal{R}(T) \le \epsilon T + \frac{\log(d+1)}{\epsilon}$. In particular, for $\epsilon = \sqrt{\log(d+1)/T}$, we have $\mathcal{R}(T) \le \sqrt{\log(d+1)\,T}$.

For the rest of the paper, we refer to the MWU algorithm with $\epsilon = \sqrt{\log(d+1)/T}$ as the online learning (OL) algorithm, and to the update in (5) as the OL update at time $t+1$.
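The two primitives of this section, the regularized least-squares estimate with its $M_t$-norm confidence radius (Section 2.1) and the MWU/OL update (5), are each only a few lines of linear algebra. Below is a minimal sketch of both; the class name, the default $\lambda = 1$, $2R = 1$ from Corollary 1, and the helper names are assumptions for illustration.

```python
import numpy as np

class RidgeEllipsoid:
    """Regularized least squares with the confidence radius of C_t (Sec. 2.1)."""

    def __init__(self, m, delta, lam=1.0, R=0.5):
        self.M = lam * np.eye(m)          # M_t = lam*I + sum_i y_i y_i^T
        self.b = np.zeros(m)              # sum_i y_i r_i
        self.m, self.delta, self.lam, self.R = m, delta, lam, R
        self.t = 1

    def update(self, y, r):
        self.M += np.outer(y, y)
        self.b += y * r
        self.t += 1

    def center(self):
        return np.linalg.solve(self.M, self.b)   # mu_hat_t

    def radius(self):
        # R*sqrt(m*log((1 + t*m/lam)/delta)) + sqrt(lam*m), as in C_t
        m, t, lam = self.m, self.t, self.lam
        return (self.R * np.sqrt(m * np.log((1 + t * m / lam) / self.delta))
                + np.sqrt(lam * m))


def ol_update(w, g, eps):
    """One MWU/OL step (5): w is the weight vector, g the observed gains g_{t,j}."""
    w = np.where(g > 0, w * (1 + eps) ** g, w * (1 - eps) ** (-g))
    return w, w / (1 + w.sum())           # new weights and the next play theta_{t+1}
```

The division by $1 + \sum_j w_{t,j}$ (rather than $\sum_j w_{t,j}$) reflects that $\Omega$ only requires $\|\boldsymbol{\theta}\|_1 \le 1$, leaving some mass unassigned.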
3 Algorithm

3.1 Optimistic estimates of unknown parameters

Let $a_t$ denote the arm played by the algorithm at time $t$. At the beginning of every round, we use the outcomes and contexts from previous rounds to construct a confidence ellipsoid for $\boldsymbol{\mu}_*$ and for every column of $W_*$. The construction of the confidence ellipsoid for $\boldsymbol{\mu}_*$ follows directly from the techniques in Section 2.1 with $\mathbf{y}_t = \mathbf{x}_t(a_t)$ and $r_t$ being the reward at time $t$. To construct a confidence ellipsoid for a column $j$ of $W_*$, we use the techniques in Section 2.1 while substituting $\mathbf{y}_t = \mathbf{x}_t(a_t)$ and $r_t = v_t(a_t)_j$, for every $j$.

As in Section 2.1, let $M_t := I + \sum_{i=1}^{t-1} \mathbf{x}_i(a_i)\mathbf{x}_i(a_i)^\top$, and construct the regularized least squares estimates for $\boldsymbol{\mu}_*$ and $W_*$, respectively, as
$$\hat{\boldsymbol{\mu}}_t := M_t^{-1} \sum_{i=1}^{t-1} \mathbf{x}_i(a_i)\, r_i(a_i), \quad (6)$$
$$\hat{W}_t := M_t^{-1} \sum_{i=1}^{t-1} \mathbf{x}_i(a_i)\, \mathbf{v}_i(a_i)^\top. \quad (7)$$
Define the confidence ellipsoid for the parameter $\boldsymbol{\mu}_*$ as
$$C_{t,0} := \Big\{ \boldsymbol{\mu} \in \mathbb{R}^m : \|\boldsymbol{\mu} - \hat{\boldsymbol{\mu}}_t\|_{M_t} \le \sqrt{m \log\big((d+tmd)/\delta\big)} + \sqrt{m} \Big\},$$
and, for every arm $a$, the optimistic estimate of $\boldsymbol{\mu}_*$ as
$$\tilde{\boldsymbol{\mu}}_t(a) := \operatorname*{argmax}_{\boldsymbol{\mu} \in C_{t,0}} \mathbf{x}_t(a)^\top \boldsymbol{\mu}. \quad (8)$$
Let $\mathbf{w}_j$ denote the $j$-th column of a matrix $W$. We define a confidence ellipsoid for each column $j$ as
$$C_{t,j} := \Big\{ \mathbf{w} \in \mathbb{R}^m : \|\mathbf{w} - \hat{\mathbf{w}}_{tj}\|_{M_t} \le \sqrt{m \log\big((d+tmd)/\delta\big)} + \sqrt{m} \Big\},$$
and denote by $G_t$ the Cartesian product of all these ellipsoids: $G_t := \{W \in \mathbb{R}^{m \times d} : \mathbf{w}_j \in C_{t,j}\}$. Note that Lemma 2 implies that $W_* \in G_t$ with probability $1-\delta$. Now, given a vector $\boldsymbol{\theta}_t \in \mathbb{R}^d$, we define the optimistic estimate of the weight matrix at time $t$ w.r.t. $\boldsymbol{\theta}_t$, for every arm $a \in [K]$, as
$$\tilde{W}_t(a) := \operatorname*{argmin}_{W \in G_t} \mathbf{x}_t(a)^\top W \boldsymbol{\theta}_t. \quad (9)$$
Intuitively, for the reward we want an upper confidence bound and for the consumption we want a lower confidence bound as the optimistic estimate. This intuition aligns with the above definitions, where a maximizer was used for the reward and a minimizer for the consumption. The utility and precise meaning of $\boldsymbol{\theta}_t$ will become clearer when we describe the algorithm and present the regret analysis. Using the definitions of $\tilde{\boldsymbol{\mu}}_t, \tilde{W}_t$, along with the results in Lemma 2 and Corollary 1 about confidence ellipsoids, the following can be derived.

Corollary 2. With probability $1-\delta$, for any sequence $\boldsymbol{\theta}_1, \boldsymbol{\theta}_2, \ldots, \boldsymbol{\theta}_T$:
1. $\mathbf{x}_t(a)^\top \tilde{\boldsymbol{\mu}}_t(a) \ge \mathbf{x}_t(a)^\top \boldsymbol{\mu}_*$, for all arms $a \in [K]$ and all times $t$.
2. $\mathbf{x}_t(a)^\top \tilde{W}_t(a)\boldsymbol{\theta}_t \le \mathbf{x}_t(a)^\top W_* \boldsymbol{\theta}_t$, for all arms $a \in [K]$ and all times $t$.
3. $\big| \sum_{t=1}^{T} (\tilde{\boldsymbol{\mu}}_t(a_t) - \boldsymbol{\mu}_*)^\top \mathbf{x}_t(a_t) \big| \le 2m\sqrt{T\log\big((1+Tm)/\delta\big)}\,\log(T)$.
4. $\big\| \sum_{t=1}^{T} (\tilde{W}_t(a_t) - W_*)^\top \mathbf{x}_t(a_t) \big\| \le \|\mathbf{1}_d\|\; 2m\sqrt{T\log\big((d+Tmd)/\delta\big)}\,\log(T)$.

Essentially, the first two claims ensure that we have optimistic estimates, and the last two claims ensure that the estimates quickly converge to the true parameters.

3.2 The core algorithm

In this section, we present an algorithm and its analysis under the assumption that a parameter $Z$ satisfying certain properties is given. Later, we show how to use the first $T_0$ rounds to compute such a $Z$, and also bound the additional regret due to these $T_0$ rounds. We define $Z$ now.

Assumption 1. Let $Z$ be such that for some universal constants $c, c'$, $\tfrac{\mathrm{OPT}}{B} \le Z \le c\,\tfrac{\mathrm{OPT}}{B} + c'$.

The algorithm constructs estimates $\hat{\boldsymbol{\mu}}_t$ and $\hat{W}_t$ as in Section 3.1. It also runs the OL algorithm for an instance of the online learning problem; the vector played by the OL algorithm in time step $t$ is $\boldsymbol{\theta}_t$. After observing the context, the optimistic estimates for each arm are constructed using $\boldsymbol{\theta}_t$, as defined in (8) and (9). Intuitively, $\boldsymbol{\theta}_t$ is used here as a multiplier to combine the different columns of the weight matrix, to get an optimistic weight vector for every arm. An adjusted estimated reward for arm $a$ is then defined by using $Z$ to linearly combine the optimistic estimate of the reward with the optimistic estimate of the consumption, as $\big(\mathbf{x}_t(a)^\top \tilde{\boldsymbol{\mu}}_t(a)\big) - Z\big(\mathbf{x}_t(a)^\top \tilde{W}_t(a)\boldsymbol{\theta}_t\big)$. The algorithm chooses the arm that appears best according to this adjusted estimated reward. After observing the resulting reward and consumption vectors, the estimates are updated. The online learning algorithm is advanced by one step, by defining the profit vector to be $\mathbf{v}_t(a_t) - \frac{B}{T}\mathbf{1}$. The algorithm ends either after $T$ time steps or as soon as the total consumption exceeds the budget along some dimension.
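The optimizations in (8) and (9) over $M_t$-norm ellipsoids have closed forms: maximizing $\mathbf{x}^\top\boldsymbol{\mu}$ over $\|\boldsymbol{\mu}-\hat{\boldsymbol{\mu}}_t\|_{M_t}\le\beta$ gives the value $\mathbf{x}^\top\hat{\boldsymbol{\mu}}_t+\beta\|\mathbf{x}\|_{M_t^{-1}}$, and since $\boldsymbol{\theta}_t\ge 0$, the column-wise minimization in (9) subtracts the same width from each column's estimate. The sketch below computes the resulting adjusted index for one arm; this closed-form shortcut and the function name are our reading of the definitions, not pseudocode from the paper.

```python
import numpy as np

def adjusted_index(x, mu_hat, W_hat, M_inv, beta, theta, Z):
    """Optimistic adjusted reward  x^T mu~(a)  -  Z * x^T W~(a) theta  for one arm.

    Closed forms over the ellipsoids of radius beta in the M_t-norm:
      max_{mu in C_{t,0}} x^T mu      = x^T mu_hat + beta * ||x||_{M^-1}
      min_{W in G_t} x^T W theta      = x^T W_hat theta - beta * ||x||_{M^-1} * ||theta||_1
    (the second line uses theta >= 0, so every column shifts down by the width).
    """
    width = np.sqrt(x @ M_inv @ x)                 # ||x||_{M_t^{-1}}
    optimistic_reward = x @ mu_hat + beta * width
    optimistic_cost = x @ W_hat @ theta - beta * width * np.abs(theta).sum()
    return optimistic_reward - Z * optimistic_cost
```

Algorithm 1 below then simply plays the arm with the largest such index in every round.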
Theorem 2. Given a $Z$ as per Assumption 1, Algorithm 1 achieves the following, with probability $1-\delta$:
$$\mathrm{regret}(T) \le O\Big(\big(\tfrac{\mathrm{OPT}}{B}+1\big)\, m\sqrt{T\log(dT/\delta)}\,\log(T)\Big).$$

Algorithm 1: Algorithm for linCBwK, with given $Z$
  Initialize $\boldsymbol{\theta}_1$ as per the online learning (OL) algorithm.
  Initialize $Z$ that satisfies Assumption 1.
  for all $t = 1, \ldots, T$ do
    Observe $X_t$.
    For every $a \in [K]$, compute $\tilde{\boldsymbol{\mu}}_t(a)$ and $\tilde{W}_t(a)$ as per (8) and (9), respectively.
    Play the arm $a_t := \operatorname*{argmax}_{a \in [K]} \mathbf{x}_t(a)^\top\big(\tilde{\boldsymbol{\mu}}_t(a) - Z\tilde{W}_t(a)\boldsymbol{\theta}_t\big)$.
    Observe $r_t(a_t)$ and $\mathbf{v}_t(a_t)$.
    If for some $j = 1, \ldots, d$, $\sum_{t' \le t} \mathbf{v}_{t'}(a_{t'}) \cdot \mathbf{e}_j \ge B$, then EXIT.
    Use $\mathbf{x}_t(a_t)$, $r_t(a_t)$ and $\mathbf{v}_t(a_t)$ to obtain $\hat{\boldsymbol{\mu}}_{t+1}$, $\hat{W}_{t+1}$ and $G_{t+1}$.
    Choose $\boldsymbol{\theta}_{t+1}$ using the OL update (refer to (5)) with $g_t(\boldsymbol{\theta}_t) := \boldsymbol{\theta}_t \cdot \big(\mathbf{v}_t(a_t) - \frac{B}{T}\mathbf{1}\big)$.
  end for

(Proof Sketch) We provide a sketch of the proof here; the full proof is given in Appendix E in the supplement. Let $\tau$ be the stopping time of the algorithm. The proof is in three steps.

Step 1: Since $\mathbb{E}[\mathbf{v}_t(a_t) \mid X_t, a_t, H_{t-1}] = W_*^\top \mathbf{x}_t(a_t)$, we apply the Azuma-Hoeffding inequality to get that with high probability $\big\| \sum_{t=1}^{\tau} \mathbf{v}_t(a_t) - W_*^\top \mathbf{x}_t(a_t) \big\|_\infty$ is small. Therefore, we can work with $\sum_{t=1}^{\tau} W_*^\top \mathbf{x}_t(a_t)$ instead of $\sum_{t=1}^{\tau} \mathbf{v}_t(a_t)$. A similar application of the Azuma-Hoeffding inequality is used to bound the gap $\big| \sum_{t=1}^{\tau} r_t(a_t) - \boldsymbol{\mu}_*^\top \mathbf{x}_t(a_t) \big|$, so that a lower bound on $\sum_{t=1}^{\tau} \boldsymbol{\mu}_*^\top \mathbf{x}_t(a_t)$ suffices to lower bound the total reward $\sum_{t=1}^{\tau} r_t(a_t)$.

Step 2: Using Corollary 2, with high probability we can bound $\big\| \sum_{t=1}^{T} (W_* - \tilde{W}_t(a_t))^\top \mathbf{x}_t(a_t) \big\|_\infty$. It is therefore sufficient to work with the sum of vectors $\tilde{W}_t(a_t)^\top \mathbf{x}_t(a_t)$ instead of $W_*^\top \mathbf{x}_t(a_t)$, and similarly with $\tilde{\boldsymbol{\mu}}_t(a_t)^\top \mathbf{x}_t(a_t)$ instead of $\boldsymbol{\mu}_*^\top \mathbf{x}_t(a_t)$.

Step 3: The proof is completed by showing the desired bound on $\mathrm{OPT} - \sum_{t=1}^{\tau} \tilde{\boldsymbol{\mu}}_t(a_t)^\top \mathbf{x}_t(a_t)$. This part is similar to the online stochastic packing problem; if the actual reward and consumption vectors were $\tilde{\boldsymbol{\mu}}_t(a_t)^\top \mathbf{x}_t(a_t)$ and $\tilde{W}_t(a_t)^\top \mathbf{x}_t(a_t)$, then it would be exactly that problem. We adapt techniques from [4]: use the OL algorithm and the $Z$ parameter to combine constraints into the objective. If a dimension is being consumed too fast, then the multiplier for that dimension should increase, making the algorithm pick arms that are not likely to consume too much along this dimension. The regret is then bounded by a combination of the online learning regret and the error in the optimistic estimates.
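Putting the pieces together, the main loop of Algorithm 1 is short once the primitives above are available. The sketch below reuses the hypothetical `adjusted_index` and `ol_update` helpers and a `LinCBwKEnv`-style environment from the earlier snippets; it is an illustrative rendering of the pseudocode, with the assumed simplification that a single ellipsoid radius `beta` is shared by the reward and all consumption columns.

```python
import numpy as np

def run_algorithm1(env, Z, delta, eps):
    """Sketch of Algorithm 1 (given Z), using the helpers defined earlier."""
    m, d, K, B, T = env.m, env.d, env.K, env.B, env.T
    M = np.eye(m)                      # M_t = I + sum x x^T
    mu_sum = np.zeros(m)               # sum x * r
    W_sum = np.zeros((m, d))           # sum x v^T
    w = np.ones(d)                     # OL weights w_{0,j} = 1
    theta = w / (1 + w.sum())          # theta_1
    spent = np.zeros(d)
    total_reward = 0.0

    for t in range(1, T + 1):
        X = env.contexts()
        M_inv = np.linalg.inv(M)
        mu_hat = M_inv @ mu_sum        # eq. (6)
        W_hat = M_inv @ W_sum          # eq. (7)
        beta = np.sqrt(m * np.log((d + t * m * d) / delta)) + np.sqrt(m)

        # Play the arm maximizing the optimistic adjusted reward.
        idx = [adjusted_index(X[:, a], mu_hat, W_hat, M_inv, beta, theta, Z)
               for a in range(K)]
        a_t = int(np.argmax(idx))
        r, v = env.pull(X, a_t)

        total_reward += r
        spent += v
        if np.any(spent >= B):         # budget exhausted along some resource
            break

        # Update least-squares statistics and the OL multiplier theta.
        x = X[:, a_t]
        M += np.outer(x, x)
        mu_sum += x * r
        W_sum += np.outer(x, v)
        w, theta = ol_update(w, v - B / T, eps)
    return total_reward
```

Here `eps` would be set to $\sqrt{\log(d+1)/T}$ as in Lemma 4, and `Z` is either given (Theorem 2) or computed from the first $T_0$ rounds as described next.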
3.3 Algorithm with Z computation

In this section, we present a modification of Algorithm 1 that computes the required parameter $Z$ satisfying Assumption 1, and therefore does not need to be provided with a $Z$ as input. The algorithm computes $Z$ using observations from the first $T_0$ rounds. Once $Z$ is computed, Algorithm 1 can be run for the remaining time steps. However, it needs to be modified slightly to take into account the budget consumed during the first $T_0$ rounds. We handle this by using a smaller budget $B' = B - T_0$ in the computations for the remaining rounds. The modified algorithm is given below.

Algorithm 2: Algorithm for linCBwK, with Z computation
  Inputs: $B$, $T_0$, $B' = B - T_0$
  Using observations from the first $T_0$ rounds, compute a $Z$ that satisfies Assumption 1.
  Run Algorithm 1 for $T - T_0$ rounds and budget $B'$.

Next, we provide the details of how to compute $Z$ from the observations in the first $T_0$ rounds, and how to choose $T_0$. We provide a method that takes advantage of the linear structure of the problem and explores in the $m$-dimensional space of contexts and weight vectors to obtain bounds independent of $K$. In every round $t = 1, \ldots, T_0$, after observing $X_t$, let $\mathbf{p}_t \in \Delta^{[K]}$ be
$$\mathbf{p}_t := \operatorname*{argmax}_{\mathbf{p} \in \Delta^{[K]}} \|X_t \mathbf{p}\|_{M_t^{-1}}, \quad (10)$$
$$\text{where } M_t := I + \sum_{i=1}^{t-1} (X_i\mathbf{p}_i)(X_i\mathbf{p}_i)^\top. \quad (11)$$
Select arm $a_t = a$ with probability $p_t(a)$. In fact, since $M_t$ is a PSD matrix, by convexity of the function $\|X_t\mathbf{p}\|^2_{M_t^{-1}}$, this is the same as playing $a_t = \operatorname*{argmax}_{a \in [K]} \|\mathbf{x}_t(a)\|_{M_t^{-1}}$. Construct estimates $\hat{\boldsymbol{\mu}}_t, \hat{W}_t$ of $\boldsymbol{\mu}_*, W_*$ at time $t$ as
$$\hat{\boldsymbol{\mu}}_t := M_t^{-1} \sum_{i=1}^{t-1} (X_i\mathbf{p}_i)\, r_i(a_i), \qquad \hat{W}_t := M_t^{-1} \sum_{i=1}^{t-1} (X_i\mathbf{p}_i)\, \mathbf{v}_i(a_i)^\top.$$
And, for some value of $\gamma$ defined later, obtain an estimate $\widehat{\mathrm{OPT}}_\gamma$ of $\mathrm{OPT}$ as
$$\widehat{\mathrm{OPT}}_\gamma := \max_{\pi}\; \frac{T}{T_0}\sum_{i=1}^{T_0} \hat{\boldsymbol{\mu}}_i^\top X_i \pi(X_i) \quad \text{such that} \quad \frac{T}{T_0}\sum_{i=1}^{T_0} \hat{W}_i^\top X_i \pi(X_i) \le (B+\gamma)\mathbf{1}. \quad (12)$$

For an intuition about the choice of arm in (10), observe from the discussion in Section 2.1 that every column $\mathbf{w}_{*j}$ of $W_*$ is guaranteed to lie inside the confidence ellipsoid centered at column $\hat{\mathbf{w}}_{tj}$ of $\hat{W}_t$, namely the ellipsoid $\|\mathbf{w} - \hat{\mathbf{w}}_{tj}\|^2_{M_t} \le 4m\log(Tm/\delta)$. Note that this ellipsoid has principal axes along the eigenvectors of $M_t$, and the lengths of the semi-principal axes are given by the inverse eigenvalues of $M_t$. Therefore, by maximizing $\|X_t\mathbf{p}\|_{M_t^{-1}}$ we are choosing the context closest to the direction of the longest principal axis of the confidence ellipsoid, i.e., the direction of maximum uncertainty. Intuitively, this corresponds to pure exploration: by making an observation in the direction where uncertainty is large, we can reduce the uncertainty in our estimate most effectively.

A more algebraic explanation is as follows. In order to get a good estimate of $\mathrm{OPT}$ by $\widehat{\mathrm{OPT}}_\gamma$, we want the estimates $\hat{W}_t$ and $W_*$ (and similarly $\hat{\boldsymbol{\mu}}_t$ and $\boldsymbol{\mu}_*$) to be close enough so that $\big\| \sum_{t=1}^{T_0} (\hat{W}_t - W_*)^\top X_t\pi(X_t) \big\|_\infty$ (and $\big| \sum_{t=1}^{T_0} (\hat{\boldsymbol{\mu}}_t - \boldsymbol{\mu}_*)^\top X_t\pi(X_t) \big|$) is small for all policies $\pi$, and in particular for sample optimal policies. Now, using Cauchy-Schwarz, these are bounded by $\sum_{t=1}^{T_0} \|\hat{\boldsymbol{\mu}}_t - \boldsymbol{\mu}_*\|_{M_t} \|X_t\pi(X_t)\|_{M_t^{-1}}$ and $\sum_{t=1}^{T_0} \|\hat{W}_t - W_*\|_{M_t} \|X_t\pi(X_t)\|_{M_t^{-1}}$, where we define $\|W\|_M$, the $M$-norm of a matrix $W$, to be the maximum of the column-wise $M$-norms. Using Lemma 2, the term $\|\hat{\boldsymbol{\mu}}_t - \boldsymbol{\mu}_*\|_{M_t}$ is bounded by $2\sqrt{m\log(T_0 m/\delta)}$, and $\|\hat{W}_t - W_*\|_{M_t}$ is bounded by $2\sqrt{m\log(T_0 md/\delta)}$, with probability $1-\delta$. Lemma 3 bounds the second term $\sum_{t=1}^{T_0} \|X_t\pi(X_t)\|_{M_t^{-1}}$, but only when $\pi$ is the played policy. This is where we use that the played $\mathbf{p}_t$ was chosen to maximize $\|X_t\mathbf{p}_t\|_{M_t^{-1}}$, so that $\sum_{t=1}^{T_0} \|X_t\pi(X_t)\|_{M_t^{-1}} \le \sum_{t=1}^{T_0} \|X_t\mathbf{p}_t\|_{M_t^{-1}}$, and the bound $\sum_{t=1}^{T_0} \|X_t\mathbf{p}_t\|_{M_t^{-1}} \le \sqrt{mT_0\log(T_0)}$ given by Lemma 3 actually bounds $\sum_{t=1}^{T_0} \|X_t\pi(X_t)\|_{M_t^{-1}}$ for all $\pi$. Combining, we get a bound of $2m\sqrt{T_0\log(T_0)}\,\log(T_0 d/\delta)$ on the deviations $\big\| \sum_{t=1}^{T_0} (\hat{W}_t - W_*)^\top X_t\pi(X_t) \big\|_\infty$ and $\big| \sum_{t=1}^{T_0} (\hat{\boldsymbol{\mu}}_t - \boldsymbol{\mu}_*)^\top X_t\pi(X_t) \big|$ for all $\pi$. We prove the following lemma.

Lemma 5. For $\gamma = \big(\tfrac{T}{T_0}\big)\, 2m\sqrt{T_0\log(T_0)}\,\log(T_0 d/\delta)$, with probability $1-O(\delta)$,
$$\mathrm{OPT} - 2\gamma \;\le\; \widehat{\mathrm{OPT}}_{2\gamma} \;\le\; \mathrm{OPT} + 9\gamma\big(\tfrac{\mathrm{OPT}}{B} + 1\big).$$

Corollary 3. Set $Z = \frac{\widehat{\mathrm{OPT}}_{2\gamma} + 2\gamma}{B} + 1$, with the above value of $\gamma$. Then, with probability $1-O(\delta)$,
$$\frac{\mathrm{OPT}}{B} + 1 \;\le\; Z \;\le\; \Big(1 + \frac{11\gamma}{B}\Big)\Big(\frac{\mathrm{OPT}}{B} + 1\Big).$$

Corollary 3 implies that as long as $B \ge \gamma$, i.e., $B \ge \tilde\Omega\big(\tfrac{mT}{\sqrt{T_0}}\big)$, $Z$ is a constant factor approximation of $\tfrac{\mathrm{OPT}}{B} + 1$, and therefore Theorem 2 provides an $\tilde O\big((\tfrac{\mathrm{OPT}}{B}+1)m\sqrt{T}\big)$ regret bound. However, this bound does not account for the budget consumed in the first $T_0$ rounds. Considering that at most $T_0$ of the budget can be consumed in the first $T_0$ rounds, we have an additional regret of $\tfrac{\mathrm{OPT}}{B}T_0$. Further, since we have $B' = B - T_0$ budget for the remaining $T - T_0$ rounds, we need a $Z$ that satisfies the required assumption for $B'$ instead of $B$ (i.e., we need $\tfrac{\mathrm{OPT}}{B'} \le Z \le O(1)\big(\tfrac{\mathrm{OPT}}{B'} + 1\big)$). If $B \ge 2T_0$, then $B' \ge B/2$, and using 2 times the $Z$ computed in Corollary 3 satisfies the required assumption.
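As noted above, the exploration rule (10) reduces to playing the arm whose context has the largest $M_t^{-1}$-norm. A minimal sketch of this first-phase loop is below; the environment object, the helper name, and the use of the point-mass maximizer instead of a general $\mathbf{p}_t$ are illustrative assumptions.

```python
import numpy as np

def exploration_phase(env, T0):
    """First T0 rounds of Algorithm 2: pure exploration via (10)-(11).

    Plays a_t = argmax_a ||x_t(a)||_{M_t^{-1}} (a point-mass maximizer of (10))
    and collects the data needed to form mu_hat_t, W_hat_t and OPT_hat_gamma.
    """
    M = np.eye(env.m)
    history = []                                   # (x, r, v) triples
    for t in range(T0):
        X = env.contexts()
        M_inv = np.linalg.inv(M)
        # Squared M_t^{-1}-norm of each arm's context (argmax is unchanged).
        widths = np.einsum('ma,mn,na->a', X, M_inv, X)
        a_t = int(np.argmax(widths))
        r, v = env.pull(X, a_t)
        x = X[:, a_t]
        history.append((x, r, v))
        M += np.outer(x, x)
    return M, history
```

After these $T_0$ rounds, the collected history yields the estimates $\hat{\boldsymbol{\mu}}_t$, $\hat{W}_t$ and the program (12), from which $Z$ is set as in Corollary 3 (solving (12) can follow the lines of the earlier static-policy LP sketch, with estimates in place of the true parameters).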
Together, these observations give Theorem 3.

Theorem 3. Using Algorithm 2 with $T_0$ such that $B > \max\{2T_0,\; mT/\sqrt{T_0}\}$, and twice the $Z$ given by Corollary 3, we get a high-probability regret bound of
$$\tilde O\Big(\big(\tfrac{\mathrm{OPT}}{B}+1\big)\big(T_0 + m\sqrt{T}\big)\Big).$$
In particular, for $B > m^{1/2}T^{3/4}$ and $m \le \sqrt{T}$, we can use $T_0 = m\sqrt{T}$ to get a regret bound of $\tilde O\big(\big(\tfrac{\mathrm{OPT}}{B}+1\big)m\sqrt{T}\big)$.
1. What is the focus of the paper regarding the contextual linear bandit problem?
2. What are the strengths of the proposed algorithm and regret bound analysis?
3. Are there any concerns or questions regarding the presentation and clarity of certain aspects of the paper, such as the Mirror descent algorithm and the role of theta_t in the overall algorithm?
Review
The paper introduces a version of the contextual linear bandit problem with an additional vector-valued consumption process, along with the reward process, and a budget constraint added. The authors develop a bandit algorithm for the problem and derive a corresponding regret bound for it. The algorithm involves several ingredients like confidence ellipsoids from linear bandits, an online linear optimization subroutine using Mirror descent, and linear bandit exploration. The regret bound is orderwise seen to be tight in terms of the context dimension and time horizon problem parameters.

EDIT: I read the author response.

The paper studies an interesting and relevant problem in online learning with contextual side information. It is quite smooth to read and does a good job at explaining the key ideas in the problem setting, where the problem fits into the literature on contextual/budgeted bandits, and how the algorithmic techniques and results compare to those of existing approaches. While the results in the paper do seem solid, I think some aspects of the presentation are lacking in clarity and could be significantly improved.

* (line 183) What exactly is meant by the "mirror descent algorithm"? It seems that the authors use it as a black-box term for a low-regret algorithm for the problem of online linear optimization over the simplex, which could perhaps be something like the exponentially weighted update algorithm. The authors could perhaps be more explicit about this, and mention the exact link function being used in the mirror descent algorithm (e.g., entropy on the simplex).
* I could not clearly understand the role of theta_t (which is optimized somehow by the OCO/mirror descent subroutine) in the overall algorithm, from the explanations given in the paper. Lines 207-209 promise a clear explanation of this sequence but it does not appear clearly later on, except for within the technical proofs. From the structure of the OCO algorithm, {theta_t} seems to be some kind of a Lagrange multiplier or dual variable sequence corresponding to the consumption < budget constraint, but it is still unclear to me at a high level why an update rule is required.
NIPS
1. What is the focus and contribution of the paper on contextual bandit algorithms?
2. What are the strengths and weaknesses of the proposed algorithm, particularly regarding its design and technical aspects?
3. Do you have any concerns about the novelty and originality of the paper compared to prior works?
4. How does the reviewer assess the potential impact or usefulness of the paper's content?
5. Are there any suggestions for improving the clarity and presentation of the paper?
Review
This paper proposes a contextual bandit algorithm with knapsacks where the rewards and costs are linear functions of the feature vector, which is associated with each action and changes over time. The authors bound the regret of the proposed algorithm and compare it to the existing work. The paper is well written. The design of the algorithm could be motivated and explained better. The paper would be stronger if the authors evaluated their algorithm empirically.

*** Technical quality ***
The paper is technical. My main comment is that the design of the proposed algorithm is not explained and justified well. In particular, the algorithm combines ideas from both stochastic learning, such as the confidence intervals on \mu and W, and adversarial learning, the selection of \theta. At first, this raises eyebrows but I think that I understand. What I do not understand is why the learning of \theta is an adversarial problem. Can you elaborate on this? Since the learning problem is stochastic, I would think that it is possible to derive good \theta_t from the optimistic estimates of \mu and W, as in (4).

*** Novelty / originality ***
This paper proposes a new contextual bandit algorithm with knapsacks. The authors clearly argue that their regret bounds are better by instantiating more general results in their setting, such as contextual bandits with context-dependent policies.

*** Potential impact or usefulness ***
The impact of this paper is reduced because this is not the first paper on contextual bandits with knapsacks. Although the proposed algorithm achieves lower regret in theory, it may be less practical because it requires an additional training period for estimating Z. Moreover, linear generalization is rarely good in practice and it is not clear what the sensitivity of the algorithm is to imperfect generalization. This can typically only be established by numerical experiments. I suggest that the authors evaluate their algorithm empirically and compare it to other bandit algorithms with knapsacks, both with context and without.

*** Clarity and presentation ***
The overall idea of this paper is clear and it is relatively easy to read. My detailed comments are below:
- Length: The main paper is 10 pages long. Therefore, it violates submission guidelines.
- Theorem 1 and line 94: I assume that both "log" and "ln" denote natural logarithms. Choose one and stick to it.
- Line 213: \tilde{\mu} should be \mu_\ast.
- Assumption 1: Say that the role of Z is to adjust costs such that they are comparable to rewards.
- Conclusions: Surprisingly none.
NIPS
Title Linear Contextual Bandits with Knapsacks Abstract We consider the linear contextual bandit problem with resource consumption, in addition to reward generation. In each round, the outcome of pulling an arm is a reward as well as a vector of resource consumptions. The expected values of these outcomes depend linearly on the context of that arm. The budget/capacity constraints require that the total consumption doesn’t exceed the budget for each resource. The objective is once again to maximize the total reward. This problem turns out to be a common generalization of classic linear contextual bandits (linContextual) [8, 11, 1], bandits with knapsacks (BwK) [3, 9], and the online stochastic packing problem (OSPP) [4, 14]. We present algorithms with near-optimal regret bounds for this problem. Our bounds compare favorably to results on the unstructured version of the problem [5, 10] where the relation between the contexts and the outcomes could be arbitrary, but the algorithm only competes against a fixed set of policies accessible through an optimization oracle. We combine techniques from the work on linContextual, BwK and OSPP in a nontrivial manner while also tackling new difficulties that are not present in any of these special cases. 1 Introduction In the contextual bandit problem [8, 2], the decision maker observes a sequence of contexts (or features). In every round she needs to pull one out of K arms, after observing the context for that round. The outcome of pulling an arm may be used along with the contexts to decide future arms. Contextual bandit problems have found many useful applications such as online recommendation systems, online advertising, and clinical trials, where the decision in every round needs to be customized to the features of the user being served. The linear contextual bandit problem [1, 8, 11] is a special case of the contextual bandit problem, where the outcome is linear in the feature vector encoding the context. As pointed by [2], contextual bandit problems represent a natural half-way point between supervised learning and reinforcement learning: the use of features to encode contexts and the models for the relation between these feature vectors and the outcome are often inherited from supervised learning, while managing the exploration-exploitation tradeoff is necessary to ensure good performance in reinforcement learning. The linear contextual bandit problem can thus be thought of as a midway between the linear regression model of supervised learning, and reinforcement learning. Recently, there has been a significant interest in introducing multiple “global constraints” in the standard bandit setting [9, 3, 10, 5]. Such constraints are crucial for many important real-world applications. For example, in clinical trials, the treatment plans may be constrained by the total availability of medical facilities, drugs and other resources. In online advertising, there are budget constraints that restrict the number of times an ad is shown. Other applications include dynamic pricing, dynamic procurement, crowdsourcing, etc.; see [9, 3] for many such examples. In this paper, we consider the linear contextual bandit with knapsacks (henceforth, linCBwK) problem. In this problem, the context vectors are generated i.i.d. in every round from some unknown distribution, and on picking an arm, a reward and a consumption vector is observed, which depend ∗Columbia University. [email protected]. †Microsoft Research. [email protected]. 
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. linearly on the context vector. The aim of the decision maker is to maximize the total reward while ensuring that the total consumption of every resource remains within a given budget. Below, we give a more precise definition of this problem. We use the following notational convention throughout: vectors are denoted by bold face lower case letters, while matrices are denoted by regular face upper case letters. Other quantities such as sets, scalars, etc. may be of either case, but never bold faced. All vectors are column vectors, i.e., a vector in n dimensions is treated as an n× 1 matrix. The transpose of matrix A is A⊤. Definition 1 (linCBwK). There are K “arms”, which we identify with the set [K]. The algorithm is initially given as input a budget B ∈ R+. In every round t, the algorithm first observes context xt(a) ∈ [0, 1]m for every arm a, and then chooses an arm at ∈ [K], and finally observes a reward rt(at) ∈ [0, 1] and a d-dimensional consumption vector vt(at) ∈ [0, 1]d. The algorithm has a “no-op” option, which is to pick none of the arms and get 0 reward and 0 consumption. The goal of the algorithm is to pick arms such that the total reward ∑T t=1 rt(at) is maximized, while ensuring that the total consumption does not exceed the budget, i.e., ∑ t vt(at) ≤ B1. We make the following stochastic assumption for context, reward, and consumption vectors. In every round t, the tuple {xt(a), rt(a),vt(a)}Ka=1 is generated from an unknown distribution D, independent of everything in previous rounds. Also, there exists an unknown vector µ∗ ∈ [0, 1]m and a matrix W∗ ∈ [0, 1]m×d such that for every arm a, given contexts xt(a), and history Ht−1 before time t, E[rt(a)|xt(a), Ht−1] = µ⊤∗ xt(a), E[vt(a)|xt(a), Ht−1] = W⊤∗ xt(a). (1) For succinctness, we will denote the tuple of contexts for K arms at time t as matrix Xt ∈ [0, 1]m×K , with xt(a) being the a th column of this matrix. Similarly, rewards and consumption vectors at time t are represented as the vector rt ∈ [0, 1]K and the matrix Vt ∈ [0, 1]d×K respectively. As we discuss later in the text, the assumption in equation (1) forms the primary distinction between our linear contextual bandit setting and the general contextual bandit setting considered in [5]. Exploiting this linearity assumption will allow us to generate regret bounds which do not depend on the number of arms K, rendering it to be especially useful when the number of arms is large. Some examples of this include recommendation systems with large number of products (e.g., retail products, travel packages, ad creatives, sponsored facebook posts). Another advantage over using the general contextual bandit setting of [5] is that we don’t need an oracle access to a certain optimization problem, which in this case is required to solve an NP-Hard problem. (See Section 1.1 for a more detailed discussion.) We compare the performance of an algorithm to that of an optimal adaptive policy that knows the distribution D and the parameters (µ∗,W∗), and can take into account the history up to that point, as well as the current context, to decide (possibly with randomization) which arm to pull at time t. However, it is easier to work with an upper bound on this, which is the optimal expected reward of a static policy that is required to satisfy the constraints only in expectation. This technique has been used in several related problems and is standard by now [14, 9]. Definition 2 (Optimal Static Policy). 
Definition 2 (Optimal Static Policy). A context-dependent non-adaptive policy $\pi$ is a mapping from the context space $[0,1]^{m \times K}$ to $\Omega = \{p \in [0,1]^K : \|p\|_1 \le 1\}$, where $\pi(X)_i$ denotes the probability of playing arm $i$ when the context is $X$, and $1 - \sum_{i=1}^{K} \pi(X)_i$ is the probability of no-op. Define $r(\pi)$ and $v(\pi)$ to be the expected reward and consumption vector of policy $\pi$, respectively, i.e.,
\[
r(\pi) := \mathbb{E}_{(X,r,V)\sim D}[r^\top \pi(X)] = \mathbb{E}_{X \sim D}[\mu_*^\top X \pi(X)], \tag{2}
\]
\[
v(\pi) := \mathbb{E}_{(X,r,V)\sim D}[V \pi(X)] = \mathbb{E}_{X \sim D}[W_*^\top X \pi(X)]. \tag{3}
\]
Let
\[
\pi^* := \arg\max_{\pi} \; T\, r(\pi) \quad \text{such that} \quad T\, v(\pi) \le B\mathbf{1} \tag{4}
\]
be the optimal static policy. Note that since no-op is allowed, a feasible policy always exists. We denote the value of this optimal static policy by $\text{OPT} := T\, r(\pi^*)$. The following lemma proves that OPT upper bounds the value of an optimal adaptive policy. The proof is in Appendix B in the supplement.

Lemma 1. Let $\text{OPT}'$ denote the value of an optimal adaptive policy that knows the distribution $D$ and the parameters $\mu_*, W_*$. Then $\text{OPT} \ge \text{OPT}'$.

Definition 3 (Regret). Let $a_t$ be the arm played at time $t$ by the algorithm. Then regret is defined as
\[
\text{regret}(T) := \text{OPT} - \sum_{t=1}^{T} r_t(a_t).
\]

1.1 Main results

Our main result is an algorithm with a near-optimal regret bound for linCBwK.

Theorem 1. There is an algorithm for linCBwK such that if $B > m^{1/2} T^{3/4}$, then with probability at least $1-\delta$,
\[
\text{regret}(T) = O\!\left( \left(\tfrac{\text{OPT}}{B} + 1\right) m \sqrt{T}\, \log(dT/\delta) \log(T) \right).
\]

Relation to general contextual bandits. There have been recent papers [5, 10] that solve problems similar to linCBwK but for general contextual bandits. In these papers, the relation between contexts and outcome vectors is arbitrary, and the algorithms compete with an arbitrary fixed set of context-dependent policies $\Pi$ accessible via an optimization oracle, with regret bounds of $O\!\left( \left(\tfrac{\text{OPT}}{B} + 1\right) \sqrt{KT \log(dT|\Pi|/\delta)} \right)$. These approaches could potentially be applied to the linear setting using a set $\Pi$ of linear context-dependent policies. Comparing their bounds with ours, essentially a $\sqrt{K \log(|\Pi|)}$ factor is replaced by a factor of $m$. Most importantly, we have no dependence on $K$ (similar to the regret bounds for linear contextual bandits [8, 1, 11]), which enables us to consider problems with large action spaces. Further, suppose that we want to use their result with the set of linear policies, i.e., policies of the form $\arg\max_{a \in [K]} \{x_t(a)^\top \theta\}$ for some fixed $\theta \in \mathbb{R}^m$. Then their algorithms would require access to an "Arg-Max Oracle" that can find the best such policy (maximizing total reward) for a given set of contexts and rewards (no resource consumption). In fact, by a reduction from the problem of learning halfspaces with noise [16], we can show that the optimization problem underlying such an "Arg-Max Oracle" is NP-hard, making such an approach computationally expensive. The proof of this is in Appendix C in the supplement.

The only downside to our results is that we need the budget $B$ to be $\Omega(m^{1/2} T^{3/4})$. Getting similar bounds for budgets as small as $B = \Theta(m\sqrt{T})$ is an interesting open problem. (This also indicates that this is indeed a harder problem than all the special cases.)

Near-optimality of regret bounds. In [12], it was shown that for the linear contextual bandits problem, no online algorithm can achieve a regret bound better than $\Omega(m\sqrt{T})$. In fact, they prove this lower bound for linear contextual bandits with static contexts. Since that problem is a special case of the linCBwK problem with $d = 1$, this shows that the dependence on $m$ and $T$ in the above regret bound is optimal up to log factors. For general contextual bandits with resource constraints, the bounds of [5, 10] are near-optimal.
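The optimal static policy of Definition 2 and (4) is a linear program over per-context distributions. As a concrete illustration, the sketch below computes a sample-average version of this LP (expectations in (2)-(3) replaced by averages over a finite set of sampled contexts) with scipy.optimize.linprog. It is a benchmark computation that assumes the true parameters $\mu_*, W_*$ are known, so it is only useful for offline evaluation, not for the learner; the function and variable names are our own.

import numpy as np
from scipy.optimize import linprog

def optimal_static_policy_value(contexts, mu_star, W_star, B, T):
    """Sample-average stand-in for the LP in (4).
    contexts: list of n matrices of shape (m, K); returns an estimate of OPT."""
    n = len(contexts)
    m, K = contexts[0].shape
    d = W_star.shape[1]
    # Decision variables p[i, a] = probability of playing arm a under sampled context X_i.
    # Objective (linprog minimizes): -(T/n) * sum_i mu_*^T X_i p_i.
    c = np.concatenate([-(T / n) * (mu_star @ X) for X in contexts])
    # Budget constraints: (T/n) * sum_i W_*^T X_i p_i <= B * 1   (d rows).
    A_budget = np.hstack([(T / n) * (W_star.T @ X) for X in contexts])
    b_budget = np.full(d, float(B))
    # Per-context simplex constraints: sum_a p[i, a] <= 1 (the slack is the no-op probability).
    A_simplex = np.zeros((n, n * K))
    for i in range(n):
        A_simplex[i, i * K:(i + 1) * K] = 1.0
    A_ub = np.vstack([A_budget, A_simplex])
    b_ub = np.concatenate([b_budget, np.ones(n)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return -res.fun   # estimated OPT; res.x holds the per-context arm distributions

The same LP, run with estimated parameters in place of $(\mu_*, W_*)$ and with an inflated budget $B + \gamma$, is essentially the estimate $\widehat{\text{OPT}}_\gamma$ defined later in (12) (with the caveat that (12) uses the per-round estimates $\hat\mu_i, \hat W_i$).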
Relation to BwK [3] and OSPP [4]. It is easy to see that the linCBwK problem is a generalization of the linear contextual bandits problem [1, 8, 11]. There, the outcome of each round is a scalar reward and the goal is simply to maximize the sum of rewards. Remarkably, the linCBwK problem also turns out to be a common generalization of the bandits with knapsacks (BwK) problem considered in [9, 3], and the online stochastic packing problem (OSPP) studied by [13, 6, 15, 14, 4]. In both BwK and OSPP, the outcome of every round $t$ is a reward $r_t$ and a vector $v_t$, and the goal of the algorithm is to maximize $\sum_{t=1}^{T} r_t$ while ensuring that $\sum_{t=1}^{T} v_t \le B\mathbf{1}$. The problems differ in how these rewards and vectors are picked. In the OSPP problem, in every round $t$ the algorithm may pick any reward/vector pair from a given set $A_t$ of $(d+1)$-dimensional vectors. The set $A_t$ is drawn i.i.d. from an unknown distribution over sets of vectors. This corresponds to the special case of linCBwK where $m = d+1$ and the context $x_t(a)$ itself is equal to $(r_t(a), v_t(a))$. In the BwK problem, there is a fixed set of arms, and for each arm there is an unknown distribution over reward/vector pairs. The algorithm picks an arm, and a reward/vector pair is drawn from the corresponding distribution for that arm. This corresponds to the special case of linCBwK where $m = K$ and the context $X_t = I$, the identity matrix, for all $t$.

We use techniques from all three special cases: our algorithms follow the primal-dual paradigm and use an online learning algorithm to search the dual space, as was done in [3]. In order to deal with linear contexts, we use techniques from [1, 8, 11] to estimate the weight matrix $W_*$ and to define "optimistic estimates" of $W_*$. We also use the technique of combining the objective and the constraints using a certain tradeoff parameter, which was introduced in [4]. Further new difficulties arise, such as estimating the optimum value from the first few rounds, a task that follows from standard techniques in each of the special cases but is very challenging here. We develop a new way of exploring that uses the linear structure, so that one can evaluate all possible choices that could have led to an optimum solution on the historic sample. This technique might be of independent interest for estimating optimum values.

One can see that the problem is indeed more than the sum of its parts from the fact that we get the optimal bound for linCBwK only when $B \ge \tilde\Omega(m^{1/2} T^{3/4})$, unlike either special case, for which the optimal bound holds for all $B$ (but is meaningful only for $B = \tilde\Omega(m\sqrt{T})$). The approach in [3] (for BwK) extends to the case of "static" contexts, where each arm has a context that does not change over time. (It was incorrectly claimed in [3] that the approach can be extended to dynamic contexts without much modification.) The OSPP of [4] is not a special case of linCBwK with static contexts; this is one indication of the additional difficulty of dynamic over static contexts.

Other related work. Recently, [17] showed an $O(\sqrt{T})$ regret bound in the linear contextual setting with a single budget constraint, when costs depend only on contexts and not on arms. Due to space constraints, we have moved many proofs from the main part of the paper to the supplement.

2 Preliminaries

2.1 Confidence Ellipsoid

Consider a stochastic process which in each round $t$ generates a pair of observations $(r_t, y_t)$, such that $r_t$ is an unknown linear function of $y_t$ plus some zero-mean bounded noise, i.e., $r_t = \mu_*^\top y_t + \eta_t$, where $y_t, \mu_* \in \mathbb{R}^m$, $|\eta_t| \le 2R$, and $\mathbb{E}[\eta_t \mid y_1, r_1, \ldots, y_{t-1}, r_{t-1}, y_t] = 0$.
At any time $t$, a high-confidence estimate of the unknown vector $\mu_*$ can be obtained by building a "confidence ellipsoid" around the $\ell_2$-regularized least-squares estimate $\hat\mu_t$ constructed from the observations made so far. This technique is common in prior work on linear contextual bandits (e.g., in [8, 11, 1]). For any regularization parameter $\lambda > 0$, let
\[
M_t := \lambda I + \sum_{i=1}^{t-1} y_i y_i^\top, \qquad \hat\mu_t := M_t^{-1} \sum_{i=1}^{t-1} y_i r_i.
\]
The following result from [1] shows that $\mu_*$ lies with high probability in an ellipsoid with center $\hat\mu_t$. For any positive semi-definite (PSD) matrix $M$, define the $M$-norm as $\|\mu\|_M := \sqrt{\mu^\top M \mu}$. The confidence ellipsoid at time $t$ is defined as
\[
C_t := \left\{ \mu \in \mathbb{R}^m : \|\mu - \hat\mu_t\|_{M_t} \le R\sqrt{m \log\left((1+tm/\lambda)/\delta\right)} + \sqrt{\lambda m} \right\}.
\]

Lemma 2 (Theorem 2 of [1]). If $\|\mu_*\|_2 \le \sqrt{m}$ and $\|y_t\|_2 \le \sqrt{m}$ for all $t$, then with probability $1-\delta$, $\mu_* \in C_t$ for all $t$.

Another useful observation about this construction is stated below. It first appeared as Lemma 11 of [8], and was also proved as Lemma 3 in [11].

Lemma 3 (Lemma 11 of [8]). $\sum_{t=1}^{T} \|y_t\|_{M_t^{-1}} \le \sqrt{mT\log(T)}$.

As a corollary of the above two lemmas, we obtain a bound on the total error in the estimate provided by "any point" from the confidence ellipsoid. (The proof is in Appendix D in the supplement.)

Corollary 1. For $t = 1, \ldots, T$, let $\tilde\mu_t \in C_t$ be a point in the confidence ellipsoid, with $\lambda = 1$ and $2R = 1$. Then, with probability $1-\delta$, $\sum_{t=1}^{T} |\tilde\mu_t^\top y_t - \mu_*^\top y_t| \le 2m\sqrt{T}\,\log\left((1+Tm)/\delta\right)\log(T)$.

2.2 Online Learning

Consider a $T$-round game played between an online learner and an adversary, where in round $t$ the learner chooses a $\theta_t \in \Omega := \{\theta : \|\theta\|_1 \le 1,\ \theta \ge 0\}$, and then observes a linear function $g_t : \Omega \to [-1,1]$ picked by the adversary. The learner's choice $\theta_t$ may only depend on the learner's and adversary's choices in previous rounds. The goal of the learner is to minimize regret, defined as the difference between the value of the best single choice in hindsight and the learner's objective value:
\[
\mathcal{R}(T) := \max_{\theta \in \Omega} \sum_{t=1}^{T} g_t(\theta) - \sum_{t=1}^{T} g_t(\theta_t).
\]
The multiplicative weight update (MWU) algorithm (in the generalization of [7]) is a fast and efficient online learning algorithm for this problem. Let $g_{t,j} := g_t(\mathbf{1}_j)$. Then, given a parameter $\epsilon > 0$, in round $t+1$ the choice of this algorithm takes the form
\[
\theta_{t+1,j} = \frac{w_{t,j}}{1 + \sum_j w_{t,j}}, \quad \text{where} \quad
w_{t,j} = \begin{cases} w_{t-1,j}(1+\epsilon)^{g_{t,j}} & \text{if } g_{t,j} > 0, \\ w_{t-1,j}(1-\epsilon)^{-g_{t,j}} & \text{if } g_{t,j} \le 0, \end{cases} \tag{5}
\]
with initialization $w_{0,j} = 1$ for all $j = 1, \ldots, d$.

Lemma 4 ([7]). For any $0 < \epsilon \le \frac{1}{2}$, the MWU algorithm provides the following regret bound for the online learning problem described above: $\mathcal{R}(T) \le \epsilon T + \frac{\log(d+1)}{\epsilon}$. In particular, for $\epsilon = \sqrt{\log(d+1)/T}$, we have $\mathcal{R}(T) \le 2\sqrt{\log(d+1)\,T}$.

For the rest of the paper, we refer to the MWU algorithm with $\epsilon = \sqrt{\log(d+1)/T}$ as the online learning (OL) algorithm, and to the update in (5) as the OL update at time $t+1$.

3 Algorithm

3.1 Optimistic estimates of unknown parameters

Let $a_t$ denote the arm played by the algorithm at time $t$. At the beginning of every round, we use the outcomes and contexts from previous rounds to construct a confidence ellipsoid for $\mu_*$ and for every column of $W_*$. The construction of the confidence ellipsoid for $\mu_*$ follows directly from the techniques in Section 2.1 with $y_t = x_t(a_t)$ and $r_t$ being the reward at time $t$. To construct a confidence ellipsoid for a column $j$ of $W_*$, we use the techniques in Section 2.1 while substituting $y_t = x_t(a_t)$ and $r_t = v_t(a_t)_j$, for every $j$.
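Before constructing these estimates, it is worth noting that the OL update (5) of Section 2.2 is the only other machinery the algorithm will need. The following is a small numpy sketch of this multiplicative weight learner over $\{\theta \ge 0, \|\theta\|_1 \le 1\}$; the class name and interface are our own.

import numpy as np

class MultiplicativeWeights:
    """OL algorithm of Section 2.2: plays theta_t with theta >= 0, ||theta||_1 <= 1,
    and applies the update in (5) to linear payoffs g_t whose coordinates lie in [-1, 1]."""

    def __init__(self, d, T):
        self.eps = np.sqrt(np.log(d + 1) / T)   # step size from Lemma 4
        self.w = np.ones(d)                     # w_{0,j} = 1 for all j

    def theta(self):
        # theta_{t+1,j} = w_{t,j} / (1 + sum_j w_{t,j}); call after update(g_t).
        return self.w / (1.0 + self.w.sum())

    def update(self, g):
        # g is the payoff vector with g_j = g_t(1_j); weight moves toward coordinates
        # with large payoff, i.e., resources whose consumption is running ahead of budget.
        g = np.clip(np.asarray(g, dtype=float), -1.0, 1.0)
        pos = g > 0
        self.w[pos] *= (1.0 + self.eps) ** g[pos]
        self.w[~pos] *= (1.0 - self.eps) ** (-g[~pos])

In Algorithm 1 below, the payoff handed to this learner at time $t$ is $g_t(\theta) = \theta \cdot (v_t(a_t) - \frac{B}{T}\mathbf{1})$, so a resource being consumed faster than its per-round allowance $B/T$ receives a growing multiplier.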
As in Section 2.1, let $M_t := I + \sum_{i=1}^{t-1} x_i(a_i) x_i(a_i)^\top$, and construct the regularized least-squares estimates for $\mu_*$ and $W_*$, respectively, as
\[
\hat\mu_t := M_t^{-1} \sum_{i=1}^{t-1} x_i(a_i) r_i(a_i), \tag{6}
\]
\[
\hat W_t := M_t^{-1} \sum_{i=1}^{t-1} x_i(a_i) v_i(a_i)^\top. \tag{7}
\]
Define the confidence ellipsoid for the parameter $\mu_*$ as
\[
C_{t,0} := \left\{ \mu \in \mathbb{R}^m : \|\mu - \hat\mu_t\|_{M_t} \le \sqrt{m \log\left((d+tmd)/\delta\right)} + \sqrt{m} \right\},
\]
and, for every arm $a$, the optimistic estimate of $\mu_*$ as
\[
\tilde\mu_t(a) := \arg\max_{\mu \in C_{t,0}} x_t(a)^\top \mu. \tag{8}
\]
Let $w_j$ denote the $j$-th column of a matrix $W$. We define a confidence ellipsoid for each column $j$ as
\[
C_{t,j} := \left\{ w \in \mathbb{R}^m : \|w - \hat w_{tj}\|_{M_t} \le \sqrt{m \log\left((d+tmd)/\delta\right)} + \sqrt{m} \right\},
\]
and denote by $G_t$ the Cartesian product of all these ellipsoids: $G_t := \{W \in \mathbb{R}^{m \times d} : w_j \in C_{t,j}\}$. Note that Lemma 2 implies that $W_* \in G_t$ with probability $1-\delta$. Now, given a vector $\theta_t \in \mathbb{R}^d$, we define the optimistic estimate of the weight matrix at time $t$ with respect to $\theta_t$, for every arm $a \in [K]$, as
\[
\tilde W_t(a) := \arg\min_{W \in G_t} x_t(a)^\top W \theta_t. \tag{9}
\]
Intuitively, for the reward we want an upper confidence bound, while for the consumption we want a lower confidence bound as an optimistic estimate. This intuition aligns with the above definitions, where a maximizer is used in the case of the reward and a minimizer in the case of the consumption. The utility and precise meaning of $\theta_t$ will become clearer when we describe the algorithm and present the regret analysis.

Using the definitions of $\tilde\mu_t$ and $\tilde W_t$, along with the results in Lemma 2 and Corollary 1 about confidence ellipsoids, the following can be derived.

Corollary 2. With probability $1-\delta$, for any sequence $\theta_1, \theta_2, \ldots, \theta_T$:
1. $x_t(a)^\top \tilde\mu_t(a) \ge x_t(a)^\top \mu_*$, for all arms $a \in [K]$ and all times $t$.
2. $x_t(a)^\top \tilde W_t(a) \theta_t \le x_t(a)^\top W_* \theta_t$, for all arms $a \in [K]$ and all times $t$.
3. $\left| \sum_{t=1}^{T} (\tilde\mu_t(a_t) - \mu_*)^\top x_t(a_t) \right| \le 2m\sqrt{T}\,\log\left((1+Tm)/\delta\right)\log(T)$.
4. $\left\| \sum_{t=1}^{T} (\tilde W_t(a_t) - W_*)^\top x_t(a_t) \right\| \le \|\mathbf{1}_d\| \left( 2m\sqrt{T}\,\log\left((d+Tmd)/\delta\right)\log(T) \right)$.

Essentially, the first two claims ensure that we have optimistic estimates, and the last two claims ensure that the estimates quickly converge to the true parameters.

3.2 The core algorithm

In this section, we present an algorithm and its analysis under the assumption that a parameter $Z$ satisfying certain properties is given. Later, we show how to use the first $T_0$ rounds to compute such a $Z$, and also bound the additional regret due to these $T_0$ rounds. We define $Z$ now.

Assumption 1. Let $Z$ be such that, for some universal constants $c, c'$, $\frac{\text{OPT}}{B} \le Z \le c\,\frac{\text{OPT}}{B} + c'$.

The algorithm constructs the estimates $\hat\mu_t$ and $\hat W_t$ as in Section 3.1. It also runs the OL algorithm for an instance of the online learning problem. The vector played by the OL algorithm in time step $t$ is $\theta_t$. After observing the context, the optimistic estimates for each arm are constructed using $\theta_t$, as defined in (8) and (9). Intuitively, $\theta_t$ is used here as a multiplier to combine the different columns of the weight matrix, yielding an optimistic weight vector for every arm. An adjusted estimated reward for arm $a$ is then defined by using $Z$ to linearly combine the optimistic estimate of the reward with the optimistic estimate of the consumption, as $\left(x_t(a)^\top \tilde\mu_t(a)\right) - Z\left(x_t(a)^\top \tilde W_t(a)\theta_t\right)$. The algorithm chooses the arm that appears best according to this adjusted estimated reward. After observing the resulting reward and consumption vectors, the estimates are updated. The online learning algorithm is advanced by one step, by defining the profit vector to be $v_t(a_t) - \frac{B}{T}\mathbf{1}$. The algorithm ends either after $T$ time steps or as soon as the total consumption exceeds the budget along some dimension.
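The optimizations in (8) and (9) have simple closed forms. For an ellipsoid $\{\mu : \|\mu - \hat\mu\|_{M} \le \rho\}$ one has $\max_\mu x^\top \mu = x^\top \hat\mu + \rho \|x\|_{M^{-1}}$, and since $\theta_t \ge 0$, the minimizer in (9) shifts every column of $\hat W_t$ in the opposite direction, giving $x^\top \tilde W_t(a)\theta_t = x^\top \hat W_t \theta_t - \rho \|x\|_{M_t^{-1}} \|\theta_t\|_1$. The sketch below uses these identities to evaluate the adjusted estimated reward of Section 3.2 directly; it is our own illustrative reading of (8)-(9), and the function names are ours.

import numpy as np

def adjusted_reward(x, mu_hat, W_hat, M_inv, rho, theta, Z):
    """Optimistic score of Section 3.2: x^T mu_tilde(a) - Z * x^T W_tilde(a) theta,
    with mu_tilde from (8) and W_tilde from (9) evaluated in closed form.
    x: (m,) context of one arm; mu_hat: (m,); W_hat: (m, d); M_inv: inverse of M_t;
    rho: ellipsoid radius sqrt(m * log((d + t*m*d)/delta)) + sqrt(m); theta: (d,), theta >= 0."""
    width = rho * np.sqrt(x @ M_inv @ x)                         # rho * ||x||_{M_t^{-1}}
    optimistic_reward = x @ mu_hat + width                       # maximizer over C_{t,0}, eq. (8)
    pessimistic_cost = x @ W_hat @ theta - width * theta.sum()   # minimizer over G_t, eq. (9)
    return optimistic_reward - Z * pessimistic_cost

def choose_arm(X, mu_hat, W_hat, M_inv, rho, theta, Z):
    """Arm selection rule of Algorithm 1: maximize the adjusted estimated reward over arms."""
    scores = [adjusted_reward(X[:, a], mu_hat, W_hat, M_inv, rho, theta, Z)
              for a in range(X.shape[1])]
    return int(np.argmax(scores))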
Theorem 2. Given a $Z$ as per Assumption 1, Algorithm 1 achieves the following with probability $1-\delta$:
\[
\text{regret}(T) \le O\!\left( \left(\tfrac{\text{OPT}}{B} + 1\right) m \sqrt{T}\, \log(dT/\delta) \log(T) \right).
\]

(Proof Sketch) We provide a sketch of the proof here, with a full proof given in Appendix E in the supplement. Let $\tau$ be the stopping time of the algorithm. The proof is in three steps.

Step 1: Since $\mathbb{E}[v_t(a_t) \mid X_t, a_t, H_{t-1}] = W_*^\top x_t(a_t)$, we apply the Azuma-Hoeffding inequality to get that, with high probability, $\left\| \sum_{t=1}^{\tau} v_t(a_t) - W_*^\top x_t(a_t) \right\|_\infty$ is small. Therefore, we can work with $\sum_{t=1}^{\tau} W_*^\top x_t(a_t)$ instead of $\sum_{t=1}^{\tau} v_t(a_t)$. A similar application of the Azuma-Hoeffding inequality is used to bound the gap $\left| \sum_{t=1}^{\tau} r_t(a_t) - \mu_*^\top x_t(a_t) \right|$, so that a lower bound on $\sum_{t=1}^{\tau} \mu_*^\top x_t(a_t)$ is sufficient to lower bound the total reward $\sum_{t=1}^{\tau} r_t(a_t)$.

Algorithm 1: Algorithm for linCBwK, with given $Z$
  Initialize $\theta_1$ as per the online learning (OL) algorithm.
  Initialize $Z$ that satisfies Assumption 1.
  for all $t = 1, \ldots, T$ do
    Observe $X_t$.
    For every $a \in [K]$, compute $\tilde\mu_t(a)$ and $\tilde W_t(a)$ as per (8) and (9), respectively.
    Play the arm $a_t := \arg\max_{a \in [K]} x_t(a)^\top\left(\tilde\mu_t(a) - Z \tilde W_t(a)\theta_t\right)$.
    Observe $r_t(a_t)$ and $v_t(a_t)$.
    If for some $j = 1, \ldots, d$, $\sum_{t' \le t} v_{t'}(a_{t'}) \cdot e_j \ge B$, then EXIT.
    Use $x_t(a_t)$, $r_t(a_t)$ and $v_t(a_t)$ to obtain $\hat\mu_{t+1}$, $\hat W_{t+1}$ and $G_{t+1}$.
    Choose $\theta_{t+1}$ using the OL update (refer to (5)) with $g_t(\theta_t) := \theta_t \cdot \left(v_t(a_t) - \frac{B}{T}\mathbf{1}\right)$.
  end for

Step 2: Using Corollary 2, with high probability we can bound $\left\| \sum_{t=1}^{T} (W_* - \tilde W_t(a_t))^\top x_t(a_t) \right\|_\infty$. It is therefore sufficient to work with the sum of vectors $\tilde W_t(a_t)^\top x_t(a_t)$ instead of $W_*^\top x_t(a_t)$, and similarly with $\tilde\mu_t(a_t)^\top x_t(a_t)$ instead of $\mu_*^\top x_t(a_t)$.

Step 3: The proof is completed by showing the desired bound on $\text{OPT} - \sum_{t=1}^{\tau} \tilde\mu_t(a_t)^\top x_t(a_t)$. This part is similar to the online stochastic packing problem; if the actual reward and consumption vectors were $\tilde\mu_t(a_t)^\top x_t(a_t)$ and $\tilde W_t(a_t)^\top x_t(a_t)$, then it would be exactly that problem. We adapt techniques from [4]: use the OL algorithm and the parameter $Z$ to combine the constraints into the objective. If a dimension is being consumed too fast, then the multiplier for that dimension should increase, pushing the algorithm to pick arms that are not likely to consume too much along this dimension. The regret is then bounded by a combination of the online learning regret and the error in the optimistic estimates.

3.3 Algorithm with Z computation

In this section, we present a modification of Algorithm 1 which computes the required parameter $Z$ satisfying Assumption 1, and therefore does not need to be provided with a $Z$ as input. The algorithm computes $Z$ using observations from the first $T_0$ rounds. Once $Z$ is computed, Algorithm 1 can be run for the remaining time steps. However, it needs to be modified slightly to take into account the budget consumed during the first $T_0$ rounds. We handle this by using a smaller budget $B' = B - T_0$ in the computations for the remaining rounds. The modified algorithm is given below.

Algorithm 2: Algorithm for linCBwK, with $Z$ computation
  Inputs: $B$, $T_0$, $B' = B - T_0$.
  Using observations from the first $T_0$ rounds, compute a $Z$ that satisfies Assumption 1.
  Run Algorithm 1 for $T - T_0$ rounds with budget $B'$.

Next, we provide the details of how to compute $Z$ from the observations in the first $T_0$ rounds, and how to choose $T_0$. We provide a method that takes advantage of the linear structure of the problem, and explores in the $m$-dimensional space of contexts and weight vectors to obtain bounds independent of $K$.
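Before turning to the exploration phase, the following is a compact end-to-end sketch of Algorithm 1's main loop. It reuses the LinCBwKEnvironment, MultiplicativeWeights and choose_arm sketches given earlier and is meant only to show how the pieces fit together; it is not the paper's reference implementation.

import numpy as np

def lin_cbwk(env, T, B, Z, delta=0.05):
    """Illustrative main loop of Algorithm 1, given a Z satisfying Assumption 1."""
    m, d = env.m, env.d
    M = np.eye(m)                        # M_t = I + sum_i x_i(a_i) x_i(a_i)^T
    b_mu = np.zeros(m)                   # sum_i x_i(a_i) r_i(a_i)
    b_W = np.zeros((m, d))               # sum_i x_i(a_i) v_i(a_i)^T
    ol = MultiplicativeWeights(d, T)
    consumed = np.zeros(d)
    total_reward = 0.0
    for t in range(1, T + 1):
        X = env.contexts()
        M_inv = np.linalg.inv(M)         # recomputed for clarity; a rank-one update would do
        mu_hat = M_inv @ b_mu
        W_hat = M_inv @ b_W
        rho = np.sqrt(m * np.log((d + t * m * d) / delta)) + np.sqrt(m)
        theta = ol.theta()
        a = choose_arm(X, mu_hat, W_hat, M_inv, rho, theta, Z)
        r, v = env.pull(X, a)
        total_reward += r
        consumed += v
        if np.any(consumed >= B):        # EXIT once some resource hits its budget
            break
        x = X[:, a]
        M += np.outer(x, x)
        b_mu += x * r
        b_W += np.outer(x, v)
        ol.update(v - B / T)             # OL update with g_t(theta) = theta . (v_t(a_t) - (B/T) 1)
    return total_reward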
In every round $t = 1, \ldots, T_0$, after observing $X_t$, let $p_t \in \Delta_{[K]}$ be
\[
p_t := \arg\max_{p \in \Delta_{[K]}} \|X_t p\|_{M_t^{-1}}, \tag{10}
\]
where
\[
M_t := I + \sum_{i=1}^{t-1} (X_i p_i)(X_i p_i)^\top. \tag{11}
\]
Select arm $a_t = a$ with probability $p_t(a)$. In fact, since $M_t$ is a PSD matrix, by convexity of the function $\|X_t p\|^2_{M_t^{-1}}$ this is the same as playing $a_t = \arg\max_{a \in [K]} \|x_t(a)\|_{M_t^{-1}}$. Construct estimates $\hat\mu_t, \hat W_t$ of $\mu_*, W_*$ at time $t$ as
\[
\hat\mu_t := M_t^{-1} \sum_{i=1}^{t-1} (X_i p_i)\, r_i(a_i), \qquad \hat W_t := M_t^{-1} \sum_{i=1}^{t-1} (X_i p_i)\, v_i(a_i)^\top.
\]
And, for some value of $\gamma$ defined later, obtain an estimate $\widehat{\text{OPT}}_\gamma$ of OPT as
\[
\widehat{\text{OPT}}_\gamma := \max_\pi \; \frac{T}{T_0} \sum_{i=1}^{T_0} \hat\mu_i^\top X_i \pi(X_i) \quad \text{such that} \quad \frac{T}{T_0} \sum_{i=1}^{T_0} \hat W_i^\top X_i \pi(X_i) \le (B + \gamma)\mathbf{1}. \tag{12}
\]
For an intuition about the choice of arm in (10), observe from the discussion in Section 2.1 that every column $w_{*j}$ of $W_*$ is guaranteed to lie inside the confidence ellipsoid centered at the column $\hat w_{tj}$ of $\hat W_t$, namely the ellipsoid $\|w - \hat w_{tj}\|^2_{M_t} \le 4m\log(Tm/\delta)$. Note that this ellipsoid has its principal axes along the eigenvectors of $M_t$, and the lengths of the semi-principal axes are given by the inverse eigenvalues of $M_t$. Therefore, by maximizing $\|X_t p\|_{M_t^{-1}}$ we are choosing the context closest to the direction of the longest principal axis of the confidence ellipsoid, i.e., the direction of maximum uncertainty. Intuitively, this corresponds to pure exploration: by making an observation in the direction where the uncertainty is large, we can reduce the uncertainty in our estimate most effectively.

A more algebraic explanation is as follows. In order to get a good estimate of OPT by $\widehat{\text{OPT}}_\gamma$, we want the estimates $\hat W_t$ and $W_*$ (and $\hat\mu_t$ and $\mu_*$) to be close enough so that $\left\| \sum_{t=1}^{T_0} (\hat W_t - W_*)^\top X_t \pi(X_t) \right\|_\infty$ (and $\left| \sum_{t=1}^{T_0} (\hat\mu_t - \mu_*)^\top X_t \pi(X_t) \right|$) is small for all policies $\pi$, and in particular for sample-optimal policies. Now, using Cauchy-Schwarz, these are bounded by $\sum_{t=1}^{T_0} \|\hat\mu_t - \mu_*\|_{M_t} \|X_t\pi(X_t)\|_{M_t^{-1}}$ and $\sum_{t=1}^{T_0} \|\hat W_t - W_*\|_{M_t} \|X_t\pi(X_t)\|_{M_t^{-1}}$, where we define $\|W\|_M$, the $M$-norm of a matrix $W$, to be the maximum of the column-wise $M$-norms. Using Lemma 2, the term $\|\hat\mu_t - \mu_*\|_{M_t}$ is bounded by $2\sqrt{m\log(T_0 m/\delta)}$, and $\|\hat W_t - W_*\|_{M_t}$ is bounded by $2\sqrt{m\log(T_0 m d/\delta)}$, with probability $1-\delta$. Lemma 3 bounds the second term $\sum_{t=1}^{T_0} \|X_t\pi(X_t)\|_{M_t^{-1}}$, but only when $\pi$ is the played policy. This is where we use the fact that the played policy $p_t$ was chosen to maximize $\|X_t p_t\|_{M_t^{-1}}$, so that $\sum_{t=1}^{T_0} \|X_t\pi(X_t)\|_{M_t^{-1}} \le \sum_{t=1}^{T_0} \|X_t p_t\|_{M_t^{-1}}$, and the bound $\sum_{t=1}^{T_0} \|X_t p_t\|_{M_t^{-1}} \le \sqrt{m T_0 \log(T_0)}$ given by Lemma 3 actually bounds $\sum_{t=1}^{T_0} \|X_t\pi(X_t)\|_{M_t^{-1}}$ for all $\pi$. Combining, we get a bound of $2m\sqrt{T_0\log(T_0)}\,\log(T_0 d/\delta)$ on the deviations $\left\| \sum_{t=1}^{T_0} (\hat W_t - W_*)^\top X_t\pi(X_t) \right\|_\infty$ and $\left| \sum_{t=1}^{T_0} (\hat\mu_t - \mu_*)^\top X_t\pi(X_t) \right|$ for all $\pi$. We prove the following lemma.

Lemma 5. For $\gamma = \left(\frac{T}{T_0}\right) 2m\sqrt{T_0\log(T_0)}\,\log(T_0 d/\delta)$, with probability $1 - O(\delta)$,
\[
\text{OPT} - 2\gamma \le \widehat{\text{OPT}}_{2\gamma} \le \text{OPT} + 9\gamma\left(\tfrac{\text{OPT}}{B} + 1\right).
\]

Corollary 3. Set $Z = \frac{\widehat{\text{OPT}}_{2\gamma} + 2\gamma}{B} + 1$, with the above value of $\gamma$. Then, with probability $1 - O(\delta)$,
\[
\frac{\text{OPT}}{B} + 1 \le Z \le \left(1 + \frac{11\gamma}{B}\right)\left(\frac{\text{OPT}}{B} + 1\right).
\]

Corollary 3 implies that as long as $B \ge \gamma$, i.e., $B \ge \tilde\Omega\!\left(\frac{mT}{\sqrt{T_0}}\right)$, $Z$ is a constant-factor approximation of $\frac{\text{OPT}}{B} + 1$, and therefore Theorem 2 should provide an $\tilde O\!\left(\left(\tfrac{\text{OPT}}{B} + 1\right) m\sqrt{T}\right)$ regret bound. However, this bound does not account for the budget consumed in the first $T_0$ rounds. Considering that at most $T_0$ can be consumed from the budget in the first $T_0$ rounds, we incur an additional regret of $\frac{\text{OPT}}{B} T_0$. Further, since we have $B' = B - T_0$ budget for the remaining $T - T_0$ rounds, we need a $Z$ that satisfies the required assumption for $B'$ instead of $B$ (i.e., we need $\frac{\text{OPT}}{B'} \le Z \le O(1)\left(\frac{\text{OPT}}{B'} + 1\right)$). If $B \ge 2T_0$, then $B' \ge B/2$, and using 2 times the $Z$ computed in Corollary 3 satisfies the required assumption.
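The exploration rule (10)-(11), the estimates, and the choice of $Z$ in Corollary 3 can be sketched as follows. The sketch reuses optimal_static_policy_value from the earlier LP sketch, run with the final estimates in place of the per-round estimates appearing in (12), which is a simplification of ours; the constant in $\gamma$ follows Lemma 5 and all names are our own.

import numpy as np

def explore_and_compute_Z(env, T, T0, B, delta=0.05):
    """Sketch of the Z-computation phase of Algorithm 2: pure exploration for T0 rounds via
    a_t = argmax_a ||x_t(a)||_{M_t^{-1}} (equivalent to (10)), then an LP estimate of OPT."""
    m, d = env.m, env.d
    M = np.eye(m)
    b_mu = np.zeros(m)
    b_W = np.zeros((m, d))
    sampled_contexts = []
    for t in range(T0):
        X = env.contexts()
        sampled_contexts.append(X)
        M_inv = np.linalg.inv(M)
        norms = np.einsum('ma,mn,na->a', X, M_inv, X)   # ||x_t(a)||^2 in the M_t^{-1} norm
        a = int(np.argmax(norms))                       # explore the direction of largest uncertainty
        r, v = env.pull(X, a)
        x = X[:, a]
        M += np.outer(x, x)
        b_mu += x * r
        b_W += np.outer(x, v)
    M_inv = np.linalg.inv(M)
    mu_hat, W_hat = M_inv @ b_mu, M_inv @ b_W
    gamma = (T / T0) * 2 * m * np.sqrt(T0 * np.log(T0)) * np.log(T0 * d / delta)   # as in Lemma 5
    opt_hat = optimal_static_policy_value(sampled_contexts, mu_hat, W_hat, B + 2 * gamma, T)
    return (opt_hat + 2 * gamma) / B + 1.0              # Z from Corollary 3

Algorithm 2 then runs Algorithm 1 on the remaining $T - T_0$ rounds with budget $B' = B - T_0$ and twice this value of $Z$.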
Together, these observations give Theorem 3.

Theorem 3. Using Algorithm 2 with a $T_0$ such that $B > \max\{2T_0,\ mT/\sqrt{T_0}\}$, and twice the $Z$ given by Corollary 3, we get a high-probability regret bound of
\[
\tilde O\!\left( \left(\tfrac{\text{OPT}}{B} + 1\right)\left(T_0 + m\sqrt{T}\right) \right).
\]
In particular, for $B > m^{1/2} T^{3/4}$ and $m \le \sqrt{T}$, we can use $T_0 = m\sqrt{T}$ to get a regret bound of $\tilde O\!\left( \left(\tfrac{\text{OPT}}{B} + 1\right) m\sqrt{T} \right)$.
1. What is the focus and contribution of the paper regarding linear contextual bandits? 2. What are the strengths of the proposed algorithm and analysis, particularly in terms of regret bound? 3. What are the weaknesses of the paper, especially regarding writing quality, typos, undefined notation, and some parts of the proof that are hard to understand? 4. How practical is the main algorithm, which relies on an estimate of the payoff for the optimal policy, and how long does the horizon need to be before real learning occurs? 5. Is the paper overlength, and should the appendix be submitted as supplementary material and the reference list cut down?
Review
Review
*********************************************************
POST REBUTTAL: Thanks for the corrections. I now vote for acceptance. I hope the authors will spend the effort to improve the readability.
*********************************************************
A new linear contextual bandit setting is proposed that includes the addition of a d-dimensional budget. This generalises existing budgeted bandits to the linear case. Everything is assumed to be i.i.d., and the budget usage of an action is assumed to be a linear function of the context for that action and some unknown parameter. Besides the setting (which is quite natural), the main contribution is a new algorithm and analysis showing that, ignoring logarithmic factors, it enjoys a regret of O((1+OPT/B) m Sqrt(T)), where m is the dimension of the action space, B is the budget and OPT is the payoff of the optimal static strategy. Although there are no lower bounds, by combining existing lower bounds the authors argue that this is not much improvable, which is quite convincing.

The algorithm operates mostly in a quite standard way: constructing confidence ellipsoids around the unknown parameters and acting optimistically subject to a budget penalty. The latter part is the most interesting, because one has to try and learn the direction in the budget space for which the algorithm is most constrained, for which the authors use online mirror descent. As far as I know this is a completely novel idea.

I found a lot to like in this paper. The setting and algorithm are interesting and the analysis is mostly quite good. There were also some weaknesses, however. First of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that there are some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs.

A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down.

Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal.

Minor comments:
* L75. Maybe say that pi is a function from R^m \to \Delta^{K+1}
* In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0?
* L177: "(OCO )" -> "(OCO)" and similar things elsewhere
* L176: You might want to mention that the learner observes the whole concave function (full information setting)
* L223: I would prefer to see a constant here. What does the O(.) really mean here?
* L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards.
* L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely.
** L431: \mu_t should be \tilde \mu_t, yes?
* The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important).
* L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it is just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow.
** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is the (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well.
* L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else.
* L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee?
* L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a
** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give.
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm.
NIPS
Title Linear Contextual Bandits with Knapsacks Abstract We consider the linear contextual bandit problem with resource consumption, in addition to reward generation. In each round, the outcome of pulling an arm is a reward as well as a vector of resource consumptions. The expected values of these outcomes depend linearly on the context of that arm. The budget/capacity constraints require that the total consumption doesn’t exceed the budget for each resource. The objective is once again to maximize the total reward. This problem turns out to be a common generalization of classic linear contextual bandits (linContextual) [8, 11, 1], bandits with knapsacks (BwK) [3, 9], and the online stochastic packing problem (OSPP) [4, 14]. We present algorithms with near-optimal regret bounds for this problem. Our bounds compare favorably to results on the unstructured version of the problem [5, 10] where the relation between the contexts and the outcomes could be arbitrary, but the algorithm only competes against a fixed set of policies accessible through an optimization oracle. We combine techniques from the work on linContextual, BwK and OSPP in a nontrivial manner while also tackling new difficulties that are not present in any of these special cases. 1 Introduction In the contextual bandit problem [8, 2], the decision maker observes a sequence of contexts (or features). In every round she needs to pull one out of K arms, after observing the context for that round. The outcome of pulling an arm may be used along with the contexts to decide future arms. Contextual bandit problems have found many useful applications such as online recommendation systems, online advertising, and clinical trials, where the decision in every round needs to be customized to the features of the user being served. The linear contextual bandit problem [1, 8, 11] is a special case of the contextual bandit problem, where the outcome is linear in the feature vector encoding the context. As pointed by [2], contextual bandit problems represent a natural half-way point between supervised learning and reinforcement learning: the use of features to encode contexts and the models for the relation between these feature vectors and the outcome are often inherited from supervised learning, while managing the exploration-exploitation tradeoff is necessary to ensure good performance in reinforcement learning. The linear contextual bandit problem can thus be thought of as a midway between the linear regression model of supervised learning, and reinforcement learning. Recently, there has been a significant interest in introducing multiple “global constraints” in the standard bandit setting [9, 3, 10, 5]. Such constraints are crucial for many important real-world applications. For example, in clinical trials, the treatment plans may be constrained by the total availability of medical facilities, drugs and other resources. In online advertising, there are budget constraints that restrict the number of times an ad is shown. Other applications include dynamic pricing, dynamic procurement, crowdsourcing, etc.; see [9, 3] for many such examples. In this paper, we consider the linear contextual bandit with knapsacks (henceforth, linCBwK) problem. In this problem, the context vectors are generated i.i.d. in every round from some unknown distribution, and on picking an arm, a reward and a consumption vector is observed, which depend ∗Columbia University. [email protected]. †Microsoft Research. [email protected]. 
linearly on the context vector.
1. What is the focus of the paper in terms of the problem it addresses? 2. What is the contribution of the proposed algorithm in solving the linear contextual bandit with knapsacks problem? 3. What are the strengths of the paper regarding its clarity and potential value? 4. Do you have any concerns about the lack of experimental validation? 5. How does the reviewer assess the novelty and significance of the problem addressed in the paper?
Review
Review The authors provide an algorithm for solving the linear contextual bandit with knapsacks problem. In this problem, the algorithm plays a series of rounds where in each round the algorithm is presented with a "context vector" x \in R^m and must choose an action in [1..K], or a "no-op" action. The algorithm then receives a reward r and a resource consumption vector v. r=0 and v=0 for the no-op action. r and v are stochastic values whose expectation for each action is a fixed linear function of the context. The goal is to maximize the sum of rewards while keeping each component of the sum of consumption vectors below some budget B. The authors provide an algorithm that, with high probability, achieves regret \tilde O( optimal_loss/B m sqrt(T) ).

This paper is clearly written and provides an answer to an interesting variation on contextual bandits. It would have been valuable to see some experimental validation that the algorithm performs as stated. The linear contextual bandits with knapsacks problem is sufficiently narrow that the algorithm will probably not see widespread use, although the advertising case is potentially valuable - again, some experiments are necessary to make this convincing.
NIPS
Title Linear Contextual Bandits with Knapsacks Abstract We consider the linear contextual bandit problem with resource consumption, in addition to reward generation. In each round, the outcome of pulling an arm is a reward as well as a vector of resource consumptions. The expected values of these outcomes depend linearly on the context of that arm. The budget/capacity constraints require that the total consumption doesn’t exceed the budget for each resource. The objective is once again to maximize the total reward. This problem turns out to be a common generalization of classic linear contextual bandits (linContextual) [8, 11, 1], bandits with knapsacks (BwK) [3, 9], and the online stochastic packing problem (OSPP) [4, 14]. We present algorithms with near-optimal regret bounds for this problem. Our bounds compare favorably to results on the unstructured version of the problem [5, 10] where the relation between the contexts and the outcomes could be arbitrary, but the algorithm only competes against a fixed set of policies accessible through an optimization oracle. We combine techniques from the work on linContextual, BwK and OSPP in a nontrivial manner while also tackling new difficulties that are not present in any of these special cases. 1 Introduction In the contextual bandit problem [8, 2], the decision maker observes a sequence of contexts (or features). In every round she needs to pull one out of K arms, after observing the context for that round. The outcome of pulling an arm may be used along with the contexts to decide future arms. Contextual bandit problems have found many useful applications such as online recommendation systems, online advertising, and clinical trials, where the decision in every round needs to be customized to the features of the user being served. The linear contextual bandit problem [1, 8, 11] is a special case of the contextual bandit problem, where the outcome is linear in the feature vector encoding the context. As pointed by [2], contextual bandit problems represent a natural half-way point between supervised learning and reinforcement learning: the use of features to encode contexts and the models for the relation between these feature vectors and the outcome are often inherited from supervised learning, while managing the exploration-exploitation tradeoff is necessary to ensure good performance in reinforcement learning. The linear contextual bandit problem can thus be thought of as a midway between the linear regression model of supervised learning, and reinforcement learning. Recently, there has been a significant interest in introducing multiple “global constraints” in the standard bandit setting [9, 3, 10, 5]. Such constraints are crucial for many important real-world applications. For example, in clinical trials, the treatment plans may be constrained by the total availability of medical facilities, drugs and other resources. In online advertising, there are budget constraints that restrict the number of times an ad is shown. Other applications include dynamic pricing, dynamic procurement, crowdsourcing, etc.; see [9, 3] for many such examples. In this paper, we consider the linear contextual bandit with knapsacks (henceforth, linCBwK) problem. In this problem, the context vectors are generated i.i.d. in every round from some unknown distribution, and on picking an arm, a reward and a consumption vector is observed, which depend ∗Columbia University. [email protected]. †Microsoft Research. [email protected]. 
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. linearly on the context vector. The aim of the decision maker is to maximize the total reward while ensuring that the total consumption of every resource remains within a given budget. Below, we give a more precise definition of this problem. We use the following notational convention throughout: vectors are denoted by bold face lower case letters, while matrices are denoted by regular face upper case letters. Other quantities such as sets, scalars, etc. may be of either case, but never bold faced. All vectors are column vectors, i.e., a vector in n dimensions is treated as an n× 1 matrix. The transpose of matrix A is A⊤. Definition 1 (linCBwK). There are K “arms”, which we identify with the set [K]. The algorithm is initially given as input a budget B ∈ R+. In every round t, the algorithm first observes context xt(a) ∈ [0, 1]m for every arm a, and then chooses an arm at ∈ [K], and finally observes a reward rt(at) ∈ [0, 1] and a d-dimensional consumption vector vt(at) ∈ [0, 1]d. The algorithm has a “no-op” option, which is to pick none of the arms and get 0 reward and 0 consumption. The goal of the algorithm is to pick arms such that the total reward ∑T t=1 rt(at) is maximized, while ensuring that the total consumption does not exceed the budget, i.e., ∑ t vt(at) ≤ B1. We make the following stochastic assumption for context, reward, and consumption vectors. In every round t, the tuple {xt(a), rt(a),vt(a)}Ka=1 is generated from an unknown distribution D, independent of everything in previous rounds. Also, there exists an unknown vector µ∗ ∈ [0, 1]m and a matrix W∗ ∈ [0, 1]m×d such that for every arm a, given contexts xt(a), and history Ht−1 before time t, E[rt(a)|xt(a), Ht−1] = µ⊤∗ xt(a), E[vt(a)|xt(a), Ht−1] = W⊤∗ xt(a). (1) For succinctness, we will denote the tuple of contexts for K arms at time t as matrix Xt ∈ [0, 1]m×K , with xt(a) being the a th column of this matrix. Similarly, rewards and consumption vectors at time t are represented as the vector rt ∈ [0, 1]K and the matrix Vt ∈ [0, 1]d×K respectively. As we discuss later in the text, the assumption in equation (1) forms the primary distinction between our linear contextual bandit setting and the general contextual bandit setting considered in [5]. Exploiting this linearity assumption will allow us to generate regret bounds which do not depend on the number of arms K, rendering it to be especially useful when the number of arms is large. Some examples of this include recommendation systems with large number of products (e.g., retail products, travel packages, ad creatives, sponsored facebook posts). Another advantage over using the general contextual bandit setting of [5] is that we don’t need an oracle access to a certain optimization problem, which in this case is required to solve an NP-Hard problem. (See Section 1.1 for a more detailed discussion.) We compare the performance of an algorithm to that of an optimal adaptive policy that knows the distribution D and the parameters (µ∗,W∗), and can take into account the history up to that point, as well as the current context, to decide (possibly with randomization) which arm to pull at time t. However, it is easier to work with an upper bound on this, which is the optimal expected reward of a static policy that is required to satisfy the constraints only in expectation. This technique has been used in several related problems and is standard by now [14, 9]. Definition 2 (Optimal Static Policy). 
A context-dependent non-adaptive policy π is a mapping from context space [0, 1]m×K to Ω = {p ∈ [0, 1]K : ‖p‖1 ≤ 1}, where π(X)i denotes the probability of playing arm i when the context is X , and 1−∑Ki=1 π(X)i is the probability of no-op. Define r(π) and v(π) to be the expected reward and consumption vector of policy π, respectively, i.e. r(π) := E(X,r,V )∼D[rπ(X)] = EX∼D[µ ⊤ ∗ Xπ(X)]. (2) v(π) := E(X,r,V )∼D[V π(X)] = EX∼D[W ⊤ ∗ Xπ(X)]. (3) Let π∗ := argmaxπ T r(π) such that T v(π) ≤ B1 (4) be the optimal static policy. Note that since no-op is allowed, a feasible policy always exists. We denote the value of this optimal static policy by OPT := T r(π∗). The following lemma proves that OPT upper bounds the value of an optimal adaptive policy. Proof is in Appendix B in the supplement. Lemma 1. Let OPT denote the value of an optimal adaptive policy that knows the distribution D and parameters µ∗,W∗. Then OPT ≥ OPT. Definition 3 (Regret). Let at be the arm played at time t by the algorithm. Then, regret is defined as regret(T ) := OPT − T ∑ t=1 rt(at). 1.1 Main results Our main result is an algorithm with near-optimal regret bound for linCBwK. Theorem 1. There is an algorithm for linCBwK such that if B > m1/2T 3/4, then with probability at least 1− δ, regret(T ) = O ( (OPTB + 1)m √ T log(dT/δ) log(T ) ) . Relation to general contextual bandits. There have been recent papers [5, 10] that solve problems similar to linCBwK but for general contextual bandits. In these papers the relation between contexts and outcome vectors is arbitrary and the algorithms compete with an arbitrary fixed set of context dependent policies Π accessible via an optimization oracle, with regret bounds being O ( (OPTB + 1) √ KT log(dT |Π|/δ) ) . These approaches could potentially be applied to the linear setting using a set Π of linear context dependent policies. Comparing their bounds with ours, in our results, essentially a √ K log(|Π|) factor is replaced by a factor of m. Most importantly, we have no dependence on K,3 which enables us to consider problems with large action spaces. Further, suppose that we want to use their result with the set of linear policies, i.e., policies of the form, for some fixed θ ∈ ℜm, arg max a∈[K] {xt(a)⊤θ}. Then, their algorithms would require access to an “Arg-Max Oracle” that can find the best such policy (maximizing total reward) for a given set of contexts and rewards (no resource consumption). In fact, by a reduction from the problem of learning halfspaces with noise [16], we can show that the optimization problem underlying such an “Arg-Max Oracle” problem is NP-Hard, making such an approach computationally expensive. The proof of this is in Appendix C in the supplement. The only downside to our results is that we need the budget B to be Ω(m1/2T 3/4). Getting similar bounds for budgets as small as B = Θ(m √ T ) is an interesting open problem. (This also indicates that this is indeed a harder problem than all the special cases.) Near-optimality of regret bounds. In [12], it was shown that for the linear contextual bandits problem, no online algorithm can achieve a regret bound better than Ω(m √ T ). In fact, they prove this lower bound for linear contextual bandits with static contexts. Since that problem is a special case of the linCBwK problem with d = 1, this shows that the dependence on m and T in the above regret bound is optimal upto log factors. For general contextual bandits with resource constraints, the bounds of [5, 10] are near optimal. 
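To make the interaction protocol of Definition 1 and the linearity assumption (1) concrete, the following toy simulator sketches one linCBwK instance. The class name, the uniform context distribution, the 1/m scaling (used only to keep expected values in [0, 1]), and the bounded noise are all illustrative assumptions; the paper specifies nothing about the generative model beyond (1).

```python
import numpy as np

class LinCBwKEnv:
    """Toy simulator of the linCBwK protocol in Definition 1 (illustrative only)."""

    def __init__(self, m, K, d, seed=0):
        self.rng = np.random.default_rng(seed)
        self.m, self.K, self.d = m, K, d
        self.mu_star = self.rng.uniform(0, 1, size=m)       # unknown reward parameter
        self.W_star = self.rng.uniform(0, 1, size=(m, d))   # unknown consumption parameter

    def new_round(self):
        """Draw the context matrix X_t (one column per arm), i.i.d. across rounds."""
        self.X = self.rng.uniform(0, 1, size=(self.m, self.K)) / self.m  # scaled so means stay in [0, 1]
        return self.X

    def pull(self, a):
        """Return (reward, consumption) for arm a; a = None is the no-op."""
        if a is None:
            return 0.0, np.zeros(self.d)
        x = self.X[:, a]
        r = np.clip(self.mu_star @ x + self.rng.uniform(-0.05, 0.05), 0.0, 1.0)
        v = np.clip(self.W_star.T @ x + self.rng.uniform(-0.05, 0.05, self.d), 0.0, 1.0)
        return float(r), v
```

A policy as in Definition 2 would map the matrix returned by `new_round` to a distribution over the K arms (plus no-op), and the budget constraint is on the sum of the `v` vectors returned by `pull`.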
Relation to BwK [3] and OSPP [4]. It is easy to see that the linCBwK problem is a generalization of the linear contextual bandits problem [1, 8, 11]. There, the outcome is scalar and the goal is to simply maximize the sum of these. Remarkably, the linCBwK problem also turns out to be a common generalization of the bandits with knapsacks (BwK) problem considered in [9, 3], and the online stochastic packing problem (OSPP) studied by [13, 6, 15, 14, 4]. In both BwK and OSPP, the outcome of every round t is a reward rt and a vector vt and the goal of the algorithm is to maximize ∑T t=1 rt while ensuring that ∑T t=1 vt ≤ B1. The problems differ in how these rewards and vectors are picked. In the OSPP problem, in every round t, the algorithm may pick any reward,vector pair from a given set At of d + 1-dimensional vectors. The set At is drawn i.i.d. from an unknown distribution over sets of vectors. This corresponds to the special case of linCBwK, where m = d+ 1 and the context xt(a) itself is equal to (rt(a),vt(a)). In the BwK problem, there is a fixed set of arms, and for each arm there is an unknown distribution over reward,vector pairs. The algorithm picks an arm and a reward,vector pair is drawn from the corresponding distribution for that arm. This 3Similar to the regret bounds for linear contextual bandits [8, 1, 11]. corresponds to the special case of linCBwK, where m = K and the context Xt = I, the identity matrix, for all t. We use techniques from all three special cases: our algorithms follow the primal-dual paradigm and use an online learning algorithm to search the dual space, as was done in [3]. In order to deal with linear contexts, we use techniques from [1, 8, 11] to estimate the weight matrix W∗, and define “optimistic estimates” of W∗. We also use the technique of combining the objective and the constraints using a certain tradeoff parameter and that was introduced in [4]. Further new difficulties arise, such as in estimating the optimum value from the first few rounds, a task that follows from standard techniques in each of the special cases but is very challenging here. We develop a new way of exploration that uses the linear structure, so that one can evaluate all possible choices that could have led to an optimum solution on the historic sample. This technique might be of independent interest in estimating optimum values. One can see that the problem is indeed more than the sum of its parts, from the fact that we get the optimal bound for linCBwK only when B ≥ Ω̃(m1/2T 3/4), unlike either special case for which the optimal bound holds for all B (but is meaningful only for B = Ω̃(m √ T )). The approach in [3] (for BwK) extends to the case of “static” contexts,4 where each arm has a context that doesn’t change over time. The OSPP of [4] is not a special case of linCBwK with static contexts; this is one indication of the additional difficulty of dynamic over static contexts. Other related work. Recently, [17] showed an O( √ T ) regret in the linear contextual setting with a single budget constraint, when costs depend only on contexts and not arms. Due to space constraints, we have moved many proofs from the main part of the paper to the supplement. 2 Preliminaries 2.1 Confidence Ellipsoid Consider a stochastic process which in each round t, generates a pair of observations (rt,yt), such that rt is an unknown linear function of yt plus some 0-mean bounded noise, i.e., rt = µ ⊤ ∗ yt + ηt, where yt,µ∗ ∈ Rm, |ηt| ≤ 2R, and E[ηt|y1, r1, . . . ,yt−1, rt−1,yt] = 0. 
At any time t, a high confidence estimate of the unknown vector µ∗ can be obtained by building a “confidence ellipsoid” around the ℓ2-regularized least-squares estimate µ̂t constructed from the observations made so far. This technique is common in prior work on linear contextual bandits (e.g., in [8, 11, 1]). For any regularization parameter λ > 0, let Mt := λI + ∑t−1 i=1 yiy ⊤ i , and µ̂t := M −1 t ∑t−1 i=1 yiri. The following result from [1] shows that µ∗ lies with high probability in an ellipsoid with center µ̂t. For any positive semi-definite (PSD) matrix M, define the M -norm as ‖µ‖M := √ µ⊤Mµ. The confidence ellipsoid at time t is defined as Ct := { µ ∈ Rm : ‖µ− µ̂t‖Mt ≤ R √ m log ((1+tm/λ)/δ) + √ λm } . Lemma 2 (Theorem 2 of [1]). If ∀ t, ‖µ∗‖2 ≤ √ m and ‖yt‖2 ≤ √ m, then with prob. 1 − δ, µ∗ ∈ Ct. Another useful observation about this construction is stated below. It first appeared as Lemma 11 of [8], and was also proved as Lemma 3 in [11]. Lemma 3 (Lemma 11 of [8]). ∑T t=1 ‖yt‖M−1t ≤ √ mT log(T ). As a corollary of the above two lemmas, we obtain a bound on the total error in the estimate provided by “any point” from the confidence ellipsoid. (Proof is in Appendix D in the supplement.) 4It was incorrectly claimed in [3] that the approach can be extended to dynamic contexts without much modifications. Corollary 1. For t = 1, . . . , T , let µ̃t ∈ Ct be a point in the confidence ellipsoid, with λ = 1 and 2R = 1. Then, with probability 1− δ, ∑T t=1 |µ̃⊤t yt − µ⊤∗ yt| ≤ 2m √ T log ((1+Tm)/δ) log(T ). 2.2 Online Learning Consider a T round game played between an online learner and an adversary, where in round t, the learner chooses a θt ∈ Ω := {θ : ‖θ‖1 ≤ 1,θ ≥ 0}, and then observes a linear function gt : Ω → [−1, 1] picked by the adversary. The learner’s choice θt may only depend on learner’s and adversary’s choices in previous rounds. The goal of the learner is to minimize regret defined as the difference between the learner’s objective value and the value of the best single choice in hindsight: R(T ) := maxθ∈Ω ∑T t=1 gt(θ)− ∑T t=1 gt(θt). The multiplicative weight update (MWU) algorithm (generalization by [7]) is a fast and efficient online learning algorithm for this problem. Let gt,j := gt(1j). Then, given a parameter ǫ > 0, in round t+ 1, the choice of this algorithm takes the following form, θt+1,j = wt,j 1 + ∑ j wt,j , where wt,j = { wt−1,j(1 + ǫ) gt,j if gt,j > 0, wt−1,j(1− ǫ)−gt,j if gt,j ≤ 0. (5) with initialization w0,j = 1, for all j = 1, . . . ,K. Lemma 4. [7] For any 0 < ǫ ≤ 12 , the MWU algorithm provides the following regret bound for the online learning problem described above: R(T ) ≤ ǫT + log(d+1)ǫ . In particular, for ǫ = √ log(d+1) T , we have R(T ) ≤ √ log(d+ 1)T For the rest of the paper, we refer to the MWU algorithm with ǫ = √ log(d+1) T as the online learning (OL) algorithm, and the update in (5) as the OL update at time t+ 1. 3 Algorithm 3.1 Optimistic estimates of unknown parameters Let at denote the arm played by the algorithm at time t. In the beginning of every round, we use the outcomes and contexts from previous rounds to construct a confidence ellipsoid for µ∗ and every column of W∗. The construction of confidence ellipsoid for µ∗ follows directly from the techniques in Section 2.1 with yt = xt(at) and rt being reward at time t. To construct a confidence ellipsoid for a column j of W∗, we use the techniques in Section 2.1 while substituting yt = xt(at) and rt = vt(at)j for every j. 
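The online-learning (OL) component of Section 2.2 reduces to the multiplicative-weight update in (5). Below is a minimal sketch; the choice of one weight per resource dimension (with the "+1" in the normalizer playing the role of a null coordinate), the numerical values, and the function name are illustrative assumptions, while the step size follows Lemma 4.

```python
import numpy as np

def ol_update(w, g, eps):
    """One multiplicative-weight step, eq. (5): w are the weights, g the gains in [-1, 1]."""
    w = np.where(g > 0, w * (1.0 + eps) ** g, w * (1.0 - eps) ** (-g))
    theta = w / (1.0 + w.sum())     # extra 1 in the normalizer keeps ||theta||_1 < 1
    return w, theta

# usage sketch (values illustrative): d resources, horizon T, step size from Lemma 4
T, d, B = 10_000, 3, 500.0
eps = np.sqrt(np.log(d + 1) / T)
w = np.ones(d)
v_t = np.array([0.08, 0.01, 0.06])          # consumption observed at round t
w, theta = ol_update(w, v_t - B / T, eps)   # g_t(theta) = theta . (v_t(a_t) - (B/T) 1)
```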
As in Section 2.1, let Mt := I + ∑t−1 i=1 xi(ai)xi(ai) ⊤, and construct the regularized least squares estimate for µ∗,W∗, respectively, as µ̂t := M −1 t ∑t−1 i=1 xi(ai)ri(ai) ⊤ (6) Ŵt := M −1 t ∑t−1 i=1 xi(ai)vi(ai) ⊤. (7) Define confidence ellipsoid for parameter µ∗ as Ct,0 := { µ ∈ Rm : ‖µ− µ̂‖Mt ≤ √ m log ((d+tmd)/δ) + √ m } , and for every arm a, the optimistic estimate of µ∗ as: µ̃t(a) := argmaxµ∈Ct,0 xt(a) ⊤µ. (8) Let wj denote the j th column of a matrix W . We define a confidence ellipsoid for each column j, as Ct,j := { w ∈ Rm : ‖w − ŵtj‖Mt ≤ √ m log ((d+tmd)/δ) + √ m } , and denote by Gt, the Cartesian product of all these ellipsoids: Gt := {W ∈ Rm×d : wj ∈ Ct,j}. Note that Lemma 2 implies that W∗ ∈ Gt with probability 1− δ. Now, given a vector θt ∈ Rd, we define the optimistic estimate of the weight matrix at time t w.r.t. θt, for every arm a ∈ [K], as : W̃t(a) := argminW∈Gt xt(a) ⊤Wθt. (9) Intuitively, for the reward, we want an upper confidence bound and for the consumption we want a lower confidence bound as an optimistic estimate. This intuition aligns with the above definitions, where the maximizer was used in case of reward and a minimizer was used for consumption. The utility and precise meaning of θt will become clearer when we describe the algorithm and present the regret analysis. Using the definition of µ̃t, W̃t, along with the results in Lemma 2 and Corollary 1 about confidence ellipsoids, the following can be derived. Corollary 2. With probability 1− δ, for any sequence of θ1,θ2, . . . ,θT , 1. xt(a) ⊤µ̃t(a) ≥ xt(a)⊤µ∗, for all arms a ∈ [K], for all time t. 2. xt(a) ⊤W̃t(a)θt ≤ xt(a)⊤W∗θt, for all arms a ∈ [K], for all time t. 3. | ∑T t=1(µ̃t(at)− µ∗)⊤xt(at)| ≤ ( 2m √ T log ((1+tm)/δ) log(T ) ) . 4. ‖ ∑T t=1(W̃t(at)−W∗)⊤xt(at)‖ ≤ ‖1d‖ ( 2m √ T log ((d+tmd)/δ) log(T ) ) . Essentially, the first two claims ensure that we have optimistic estimates, and the last two claims ensure that the estimates quickly converge to the true parameters. 3.2 The core algorithm In this section, we present an algorithm and its analysis, under the assumption that a parameter Z satisfying certain properties is given. Later, we show how to use the first T0 rounds to compute such a Z, and also bound the additional regret due to these T0 rounds. We define Z now. Assumption 1. Let Z be such that for some universal constants c, c′, OPTB ≤ Z ≤ cOPTB + c′. The algorithm constructs estimates µ̂t and Ŵt as in Section 3.1. It also runs the OL algorithm for an instance of the online learning problem. The vector played by the OL algorithm in time step t is θt. After observing the context, the optimistic estimates for each arm are then constructed using θt, as defined in (8) and (9). Intuitively, θt is used here as a multiplier to combine different columns of the weight matrix, to get an optimistic weight vector for every arm. An adjusted estimated reward for arm a is then defined by using Z to linearly combine the optimistic estimate of the reward with the optimistic estimate of the consumption, as (xt(a) ⊤µ̃t(a))− Z(xt(a)⊤W̃t(a)θt). The algorithm chooses the arm which appears to be the best according to the adjusted estimated reward. After observing the resulting reward and consumption vectors, the estimates are updated. The online learning algorithm is advanced by one step, by defining the profit vector to be vt(at) − BT 1. The algorithm ends either after T time steps or as soon as the total consumption exceeds the budget along some dimension. Theorem 2. 
Given a Z as per Assumption 1, Algorithm 1 achieves the following, with prob. 1− δ: regret(T ) ≤ O ( (OPTB + 1)m √ T log(dT/δ) log(T ) ) . (Proof Sketch) We provide a sketch of the proof here, with a full proof given in Appendix E in the supplement. Let τ be the stopping time of the algorithm. The proof is in 3 steps: Step 1: Since E[vt(at)|Xt, at, Ht−1] = W⊤∗ xt(at), we apply Azuma-Hoeffding inequality to get that with high probability ∥ ∥ ∑τ t=1 vt(at)−W⊤∗ xt(at) ∥ ∥ ∞ is small. Therefore, we can work with ∑τ t=1 W ⊤ ∗ xt(at) instead of ∑τ t=1 vt(at). A similar application of Azuma-Hoeffding inequality is used to bound the gap | ∑τ t=1 rt(at) − µ⊤∗ xt(at)|, so that a lower bound on ∑τ t=1 µ ⊤ ∗ xt(at) is sufficient to lower bound the total reward ∑τ t=1 rt(at). Algorithm 1 Algorithm for linCBwK, with given Z Initialize θ1 as per the online learning (OL) algorithm. Initialize Z that satisfies Assumption 1. for all t = 1, ..., T do Observe Xt. For every a ∈ [K], compute µ̃t(a) and W̃t(a) as per (8) and (9) respectively. Play the arm at := argmaxa∈[K] xt(a) ⊤(µ̃t(a)− ZW̃t(a)θt). Observe rt(at) and vt(at). If for some j = 1..d, ∑ t′≤t vt′(at′) · ej ≥ B then EXIT. Use xt(at), rt(at) and vt(at) to obtain µ̂t+1, Ŵt+1 and Gt+1. Choose θt+1 using the OL update (refer to (5)) with gt(θt) := θt · ( vt(at)− BT 1 ) . end for Step 2: Using Corollary 2, with high probability, we can bound ∥ ∥ ∥ ∑T t=1(W∗ − W̃t(at))⊤xt(at) ∥ ∥ ∥ ∞ . It is therefore sufficient to work with the sum of vectors W̃t(at) ⊤xt(at) instead of W ⊤ ∗ xt(at), and similarly with µ̃t(at) ⊤xt(at) instead of µ ⊤ ∗ xt(at). Step 3: The proof is completed by showing the desired bound on OPT − ∑τ t=1 µ̃t(at) ⊤xt(at). This part is similar to the online stochastic packing problem; if the actual reward and consumption vectors were µ̃t(at) ⊤xt(at) and W̃t(at) ⊤xt(at), then it would be exactly that problem. We adapt techniques from [4]: use the OL algorithm and the Z parameter to combine constraints into the objective. If a dimension is being consumed too fast, then the multiplier for that dimension should increase, making the algorithm to pick arms that are not likely to consume too much along this dimension. Regret is then bounded by a combination of the online learning regret and the error in the optimistic estimates. 3.3 Algorithm with Z computation In this section, we present a modification of Algorithm 1 which computes the required parameter Z that satisfies Assumption 1, and therefore does not need to be provided with a Z as input. The algorithm computes Z using observations from the first T0 rounds. Once Z is computed, Algorithm 1 can be run for the remaining time steps. However, it needs to be modified slightly to take into account the budget consumed during the first T0 rounds. We handle this by using a smaller budget B′ = B − T0 in the computations for the remaining rounds. The modified algorithm is given below. Algorithm 2 Algorithm for linCBwK, with Z computation Inputs: B, T0, B ′ = B − T0 Using observations from first T0 rounds, compute a Z that satisfies Assumption 1. Run Algorithm 1 for T − T0 rounds and budget B′. Next, we provide the details of how to compute Z from observations in the first T0 rounds, and how to choose T0. We provide a method that takes advantage of the linear structure of the problem, and explores in the m-dimensional space of contexts and weight vectors to obtain bounds independent of K. In every round t = 1, . . . 
, $T_0$, after observing $X_t$, let $p_t \in \Delta_{[K]}$ be
$$p_t := \arg\max_{p \in \Delta_{[K]}} \|X_t p\|_{M_t^{-1}}, \qquad (10)$$
where
$$M_t := I + \sum_{i=1}^{t-1} (X_i p_i)(X_i p_i)^\top. \qquad (11)$$
Select arm $a_t = a$ with probability $p_t(a)$. In fact, since $M_t$ is a PSD matrix, due to convexity of the function $\|X_t p\|^2_{M_t^{-1}}$, this is the same as playing $a_t = \arg\max_{a \in [K]} \|x_t(a)\|_{M_t^{-1}}$. Construct estimates $\hat{\mu}_t, \hat{W}_t$ of $\mu_*, W_*$ at time $t$ as
$$\hat{\mu}_t := M_t^{-1} \sum_{i=1}^{t-1} (X_i p_i)\, r_i(a_i), \qquad \hat{W}_t := M_t^{-1} \sum_{i=1}^{t-1} (X_i p_i)\, v_i(a_i)^\top.$$
And, for some value of $\gamma$ defined later, obtain an estimate $\widehat{\mathrm{OPT}}_\gamma$ of OPT as:
$$\widehat{\mathrm{OPT}}_\gamma := \max_\pi \ \frac{T}{T_0} \sum_{i=1}^{T_0} \hat{\mu}_i^\top X_i \pi(X_i) \quad \text{such that} \quad \frac{T}{T_0} \sum_{i=1}^{T_0} \hat{W}_i^\top X_i \pi(X_i) \le B + \gamma. \qquad (12)$$
For an intuition about the choice of arm in (10), observe from the discussion in Section 2.1 that every column $w_{*j}$ of $W_*$ is guaranteed to lie inside the confidence ellipsoid centered at column $\hat{w}_{tj}$ of $\hat{W}_t$, namely the ellipsoid $\|w - \hat{w}_{tj}\|^2_{M_t} \le 4m \log(Tm/\delta)$. Note that this ellipsoid has principal axes given by the eigenvectors of $M_t$, and the lengths of the semi-principal axes are given by the inverse eigenvalues of $M_t$. Therefore, by maximizing $\|X_t p\|_{M_t^{-1}}$ we are choosing the context closest to the direction of the longest principal axis of the confidence ellipsoid, i.e., the direction of maximum uncertainty. Intuitively, this corresponds to pure exploration: by making an observation in the direction where the uncertainty is large, we can reduce the uncertainty in our estimate most effectively.
A more algebraic explanation is as follows. In order to get a good estimate of OPT by $\widehat{\mathrm{OPT}}_\gamma$, we want the estimates $\hat{W}_t$ and $W_*$ (and $\hat{\mu}_t$ and $\mu_*$) to be close enough so that $\|\sum_{t=1}^{T_0} (\hat{W}_t - W_*)^\top X_t \pi(X_t)\|_\infty$ (and $|\sum_{t=1}^{T_0} (\hat{\mu}_t - \mu_*)^\top X_t \pi(X_t)|$) is small for all policies $\pi$, and in particular for sample optimal policies. Now, using Cauchy–Schwarz, these are bounded by $\sum_{t=1}^{T_0} \|\hat{\mu}_t - \mu_*\|_{M_t} \|X_t \pi(X_t)\|_{M_t^{-1}}$ and $\sum_{t=1}^{T_0} \|\hat{W}_t - W_*\|_{M_t} \|X_t \pi(X_t)\|_{M_t^{-1}}$, where we define $\|W\|_M$, the $M$-norm of a matrix $W$, to be the maximum of the column-wise $M$-norms. Using Lemma 2, the term $\|\hat{\mu}_t - \mu_*\|_{M_t}$ is bounded by $2\sqrt{m \log(T_0 m/\delta)}$, and $\|\hat{W}_t - W_*\|_{M_t}$ is bounded by $2\sqrt{m \log(T_0 m d/\delta)}$, with probability $1-\delta$. Lemma 3 bounds the second term $\sum_{t=1}^{T_0} \|X_t \pi(X_t)\|_{M_t^{-1}}$, but only when $\pi$ is the played policy. This is where we use that the played policy $p_t$ was chosen to maximize $\|X_t p_t\|_{M_t^{-1}}$, so that $\sum_{t=1}^{T_0} \|X_t \pi(X_t)\|_{M_t^{-1}} \le \sum_{t=1}^{T_0} \|X_t p_t\|_{M_t^{-1}}$, and the bound $\sum_{t=1}^{T_0} \|X_t p_t\|_{M_t^{-1}} \le \sqrt{m T_0 \log(T_0)}$ given by Lemma 3 in fact bounds $\sum_{t=1}^{T_0} \|X_t \pi(X_t)\|_{M_t^{-1}}$ for all $\pi$. Combining, we get a bound of $2m\sqrt{T_0 \log(T_0)}\,\log(T_0 d/\delta)$ on the deviations $\|\sum_{t=1}^{T_0} (\hat{W}_t - W_*)^\top X_t \pi(X_t)\|_\infty$ and $|\sum_{t=1}^{T_0} (\hat{\mu}_t - \mu_*)^\top X_t \pi(X_t)|$ for all $\pi$. We prove the following lemma.
Lemma 5. For $\gamma = \left(\frac{T}{T_0}\right) 2m\sqrt{T_0 \log(T_0)}\,\log(T_0 d/\delta)$, with probability $1 - O(\delta)$,
$$\mathrm{OPT} - 2\gamma \ \le\ \widehat{\mathrm{OPT}}_{2\gamma} \ \le\ \mathrm{OPT} + 9\gamma\left(\frac{\mathrm{OPT}}{B} + 1\right).$$
Corollary 3. Set $Z = \frac{\widehat{\mathrm{OPT}}_{2\gamma} + 2\gamma}{B} + 1$, with the above value of $\gamma$. Then, with probability $1 - O(\delta)$,
$$\frac{\mathrm{OPT}}{B} + 1 \ \le\ Z \ \le\ \left(1 + \frac{11\gamma}{B}\right)\left(\frac{\mathrm{OPT}}{B} + 1\right).$$
Corollary 3 implies that as long as $B \ge \gamma$, i.e., $B \ge \tilde{\Omega}\!\left(\frac{mT}{\sqrt{T_0}}\right)$, $Z$ is a constant-factor approximation of $\frac{\mathrm{OPT}}{B} + 1 \ge Z^*$, and therefore Theorem 2 should provide an $\tilde{O}\!\left(\left(\frac{\mathrm{OPT}}{B} + 1\right) m\sqrt{T}\right)$ regret bound. However, this bound does not account for the budget consumed in the first $T_0$ rounds. Considering that at most $T_0$ amount can be consumed from the budget in the first $T_0$ rounds, we have an additional regret of $\frac{\mathrm{OPT}}{B} T_0$. Further, since we have $B' = B - T_0$ budget for the remaining $T - T_0$ rounds, we need a $Z$ that satisfies the required assumption for $B'$ instead of $B$ (i.e., we need $\frac{\mathrm{OPT}}{B'} \le Z \le O(1)\left(\frac{\mathrm{OPT}}{B'} + 1\right)$). If $B \ge 2T_0$, then $B' \ge B/2$, and using 2 times the $Z$ computed in Corollary 3 would satisfy the required assumption.
Together, these observations give Theorem 3.
Theorem 3. Using Algorithm 2 with $T_0$ such that $B > \max\{2T_0,\ mT/\sqrt{T_0}\}$, and twice the $Z$ given by Corollary 3, we get a high-probability regret bound of $\tilde{O}\!\left(\left(\frac{\mathrm{OPT}}{B} + 1\right)\left(T_0 + m\sqrt{T}\right)\right)$. In particular, for $B > m^{1/2} T^{3/4}$ and $m \le \sqrt{T}$, we can use $T_0 = m\sqrt{T}$ to get a regret bound of $\tilde{O}\!\left(\left(\frac{\mathrm{OPT}}{B} + 1\right) m\sqrt{T}\right)$.
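As a summary of the pieces assembled in Sections 3.1–3.2 above, here is a sketch of the per-round arm choice of Algorithm 1. It uses the standard closed form for optimizing a linear function over a confidence ellipsoid, which turns the optimistic estimates (8)–(9) into a bonus/penalty of the form $\beta \|x\|_{M^{-1}}$ (the penalty scales with $\|\theta\|_1$ because $\theta \ge 0$). Here `beta` stands for the ellipsoid radius of Lemma 2, and all function and variable names are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def choose_arm(X, M_inv, mu_hat, W_hat, theta, Z, beta):
    """Arm selection of Algorithm 1 with the optimistic scores (8)-(9) in closed form."""
    scores = []
    for a in range(X.shape[1]):
        x = X[:, a]
        width = np.sqrt(x @ M_inv @ x)                     # ||x||_{M^{-1}}
        optimistic_reward = x @ mu_hat + beta * width      # closed form of eq. (8)
        optimistic_cost = x @ W_hat @ theta - beta * width * theta.sum()  # closed form of eq. (9)
        scores.append(optimistic_reward - Z * optimistic_cost)
    return int(np.argmax(scores))
```

The remaining steps of each round are the rank-one updates of $M_t$, $\hat{\mu}_t$, $\hat{W}_t$ with the observed $(x_t(a_t), r_t(a_t), v_t(a_t))$, the OL update with gain $v_t(a_t) - (B/T)\mathbf{1}$ (see the update sketch after Section 2.2 above), and the exit test on the accumulated consumption.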
1. What is the focus of the paper in terms of the problem it addresses?
2. What are the key techniques used in the paper to tackle the issue?
3. What is the significance of the paper's contribution in the field of linear contextual bandits?
4. How does the reviewer assess the quality and impact of the paper's content?
5. Are there any suggestions or recommendations for improving the paper?
Review
Review In this paper the authors consider linear contextual bandits with constraints on resource consumption. The authors combine techniques from previous well-known work and tackle some of the issues that do not arise in the unstructured version of the problem. I liked the idea that the authors have set out to address. I can see that the authors put hard work into putting this paper together and I commend them for their efforts. However, I do believe that this paper would benefit from a few minor additions. I would suggest the following: the last paragraph of the introduction should outline what the rest of the paper holds, essentially a gist of the remaining sections. I also felt that the paper ends rather abruptly, so it might be really helpful to include a conclusions section before the references. I praise and appreciate that the authors used gender positive language such as "in every round ... she needs to" etc. I appreciate again the novel idea that the authors are trying to tackle. I wish them best of luck.
NIPS
Title Linear Contextual Bandits with Knapsacks Abstract We consider the linear contextual bandit problem with resource consumption, in addition to reward generation. In each round, the outcome of pulling an arm is a reward as well as a vector of resource consumptions. The expected values of these outcomes depend linearly on the context of that arm. The budget/capacity constraints require that the total consumption doesn’t exceed the budget for each resource. The objective is once again to maximize the total reward. This problem turns out to be a common generalization of classic linear contextual bandits (linContextual) [8, 11, 1], bandits with knapsacks (BwK) [3, 9], and the online stochastic packing problem (OSPP) [4, 14]. We present algorithms with near-optimal regret bounds for this problem. Our bounds compare favorably to results on the unstructured version of the problem [5, 10] where the relation between the contexts and the outcomes could be arbitrary, but the algorithm only competes against a fixed set of policies accessible through an optimization oracle. We combine techniques from the work on linContextual, BwK and OSPP in a nontrivial manner while also tackling new difficulties that are not present in any of these special cases. 1 Introduction In the contextual bandit problem [8, 2], the decision maker observes a sequence of contexts (or features). In every round she needs to pull one out of K arms, after observing the context for that round. The outcome of pulling an arm may be used along with the contexts to decide future arms. Contextual bandit problems have found many useful applications such as online recommendation systems, online advertising, and clinical trials, where the decision in every round needs to be customized to the features of the user being served. The linear contextual bandit problem [1, 8, 11] is a special case of the contextual bandit problem, where the outcome is linear in the feature vector encoding the context. As pointed by [2], contextual bandit problems represent a natural half-way point between supervised learning and reinforcement learning: the use of features to encode contexts and the models for the relation between these feature vectors and the outcome are often inherited from supervised learning, while managing the exploration-exploitation tradeoff is necessary to ensure good performance in reinforcement learning. The linear contextual bandit problem can thus be thought of as a midway between the linear regression model of supervised learning, and reinforcement learning. Recently, there has been a significant interest in introducing multiple “global constraints” in the standard bandit setting [9, 3, 10, 5]. Such constraints are crucial for many important real-world applications. For example, in clinical trials, the treatment plans may be constrained by the total availability of medical facilities, drugs and other resources. In online advertising, there are budget constraints that restrict the number of times an ad is shown. Other applications include dynamic pricing, dynamic procurement, crowdsourcing, etc.; see [9, 3] for many such examples. In this paper, we consider the linear contextual bandit with knapsacks (henceforth, linCBwK) problem. In this problem, the context vectors are generated i.i.d. in every round from some unknown distribution, and on picking an arm, a reward and a consumption vector is observed, which depend ∗Columbia University. [email protected]. †Microsoft Research. [email protected]. 
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. linearly on the context vector. The aim of the decision maker is to maximize the total reward while ensuring that the total consumption of every resource remains within a given budget. Below, we give a more precise definition of this problem. We use the following notational convention throughout: vectors are denoted by bold face lower case letters, while matrices are denoted by regular face upper case letters. Other quantities such as sets, scalars, etc. may be of either case, but never bold faced. All vectors are column vectors, i.e., a vector in n dimensions is treated as an n× 1 matrix. The transpose of matrix A is A⊤. Definition 1 (linCBwK). There are K “arms”, which we identify with the set [K]. The algorithm is initially given as input a budget B ∈ R+. In every round t, the algorithm first observes context xt(a) ∈ [0, 1]m for every arm a, and then chooses an arm at ∈ [K], and finally observes a reward rt(at) ∈ [0, 1] and a d-dimensional consumption vector vt(at) ∈ [0, 1]d. The algorithm has a “no-op” option, which is to pick none of the arms and get 0 reward and 0 consumption. The goal of the algorithm is to pick arms such that the total reward ∑T t=1 rt(at) is maximized, while ensuring that the total consumption does not exceed the budget, i.e., ∑ t vt(at) ≤ B1. We make the following stochastic assumption for context, reward, and consumption vectors. In every round t, the tuple {xt(a), rt(a),vt(a)}Ka=1 is generated from an unknown distribution D, independent of everything in previous rounds. Also, there exists an unknown vector µ∗ ∈ [0, 1]m and a matrix W∗ ∈ [0, 1]m×d such that for every arm a, given contexts xt(a), and history Ht−1 before time t, E[rt(a)|xt(a), Ht−1] = µ⊤∗ xt(a), E[vt(a)|xt(a), Ht−1] = W⊤∗ xt(a). (1) For succinctness, we will denote the tuple of contexts for K arms at time t as matrix Xt ∈ [0, 1]m×K , with xt(a) being the a th column of this matrix. Similarly, rewards and consumption vectors at time t are represented as the vector rt ∈ [0, 1]K and the matrix Vt ∈ [0, 1]d×K respectively. As we discuss later in the text, the assumption in equation (1) forms the primary distinction between our linear contextual bandit setting and the general contextual bandit setting considered in [5]. Exploiting this linearity assumption will allow us to generate regret bounds which do not depend on the number of arms K, rendering it to be especially useful when the number of arms is large. Some examples of this include recommendation systems with large number of products (e.g., retail products, travel packages, ad creatives, sponsored facebook posts). Another advantage over using the general contextual bandit setting of [5] is that we don’t need an oracle access to a certain optimization problem, which in this case is required to solve an NP-Hard problem. (See Section 1.1 for a more detailed discussion.) We compare the performance of an algorithm to that of an optimal adaptive policy that knows the distribution D and the parameters (µ∗,W∗), and can take into account the history up to that point, as well as the current context, to decide (possibly with randomization) which arm to pull at time t. However, it is easier to work with an upper bound on this, which is the optimal expected reward of a static policy that is required to satisfy the constraints only in expectation. This technique has been used in several related problems and is standard by now [14, 9]. Definition 2 (Optimal Static Policy). 
A context-dependent non-adaptive policy π is a mapping from context space [0, 1]m×K to Ω = {p ∈ [0, 1]K : ‖p‖1 ≤ 1}, where π(X)i denotes the probability of playing arm i when the context is X , and 1−∑Ki=1 π(X)i is the probability of no-op. Define r(π) and v(π) to be the expected reward and consumption vector of policy π, respectively, i.e. r(π) := E(X,r,V )∼D[rπ(X)] = EX∼D[µ ⊤ ∗ Xπ(X)]. (2) v(π) := E(X,r,V )∼D[V π(X)] = EX∼D[W ⊤ ∗ Xπ(X)]. (3) Let π∗ := argmaxπ T r(π) such that T v(π) ≤ B1 (4) be the optimal static policy. Note that since no-op is allowed, a feasible policy always exists. We denote the value of this optimal static policy by OPT := T r(π∗). The following lemma proves that OPT upper bounds the value of an optimal adaptive policy. Proof is in Appendix B in the supplement. Lemma 1. Let OPT denote the value of an optimal adaptive policy that knows the distribution D and parameters µ∗,W∗. Then OPT ≥ OPT. Definition 3 (Regret). Let at be the arm played at time t by the algorithm. Then, regret is defined as regret(T ) := OPT − T ∑ t=1 rt(at). 1.1 Main results Our main result is an algorithm with near-optimal regret bound for linCBwK. Theorem 1. There is an algorithm for linCBwK such that if B > m1/2T 3/4, then with probability at least 1− δ, regret(T ) = O ( (OPTB + 1)m √ T log(dT/δ) log(T ) ) . Relation to general contextual bandits. There have been recent papers [5, 10] that solve problems similar to linCBwK but for general contextual bandits. In these papers the relation between contexts and outcome vectors is arbitrary and the algorithms compete with an arbitrary fixed set of context dependent policies Π accessible via an optimization oracle, with regret bounds being O ( (OPTB + 1) √ KT log(dT |Π|/δ) ) . These approaches could potentially be applied to the linear setting using a set Π of linear context dependent policies. Comparing their bounds with ours, in our results, essentially a √ K log(|Π|) factor is replaced by a factor of m. Most importantly, we have no dependence on K,3 which enables us to consider problems with large action spaces. Further, suppose that we want to use their result with the set of linear policies, i.e., policies of the form, for some fixed θ ∈ ℜm, arg max a∈[K] {xt(a)⊤θ}. Then, their algorithms would require access to an “Arg-Max Oracle” that can find the best such policy (maximizing total reward) for a given set of contexts and rewards (no resource consumption). In fact, by a reduction from the problem of learning halfspaces with noise [16], we can show that the optimization problem underlying such an “Arg-Max Oracle” problem is NP-Hard, making such an approach computationally expensive. The proof of this is in Appendix C in the supplement. The only downside to our results is that we need the budget B to be Ω(m1/2T 3/4). Getting similar bounds for budgets as small as B = Θ(m √ T ) is an interesting open problem. (This also indicates that this is indeed a harder problem than all the special cases.) Near-optimality of regret bounds. In [12], it was shown that for the linear contextual bandits problem, no online algorithm can achieve a regret bound better than Ω(m √ T ). In fact, they prove this lower bound for linear contextual bandits with static contexts. Since that problem is a special case of the linCBwK problem with d = 1, this shows that the dependence on m and T in the above regret bound is optimal upto log factors. For general contextual bandits with resource constraints, the bounds of [5, 10] are near optimal. 
Relation to BwK [3] and OSPP [4]. It is easy to see that the linCBwK problem is a generalization of the linear contextual bandits problem [1, 8, 11]. There, the outcome is scalar and the goal is to simply maximize the sum of these. Remarkably, the linCBwK problem also turns out to be a common generalization of the bandits with knapsacks (BwK) problem considered in [9, 3], and the online stochastic packing problem (OSPP) studied by [13, 6, 15, 14, 4]. In both BwK and OSPP, the outcome of every round t is a reward rt and a vector vt and the goal of the algorithm is to maximize ∑T t=1 rt while ensuring that ∑T t=1 vt ≤ B1. The problems differ in how these rewards and vectors are picked. In the OSPP problem, in every round t, the algorithm may pick any reward,vector pair from a given set At of d + 1-dimensional vectors. The set At is drawn i.i.d. from an unknown distribution over sets of vectors. This corresponds to the special case of linCBwK, where m = d+ 1 and the context xt(a) itself is equal to (rt(a),vt(a)). In the BwK problem, there is a fixed set of arms, and for each arm there is an unknown distribution over reward,vector pairs. The algorithm picks an arm and a reward,vector pair is drawn from the corresponding distribution for that arm. This 3Similar to the regret bounds for linear contextual bandits [8, 1, 11]. corresponds to the special case of linCBwK, where m = K and the context Xt = I, the identity matrix, for all t. We use techniques from all three special cases: our algorithms follow the primal-dual paradigm and use an online learning algorithm to search the dual space, as was done in [3]. In order to deal with linear contexts, we use techniques from [1, 8, 11] to estimate the weight matrix W∗, and define “optimistic estimates” of W∗. We also use the technique of combining the objective and the constraints using a certain tradeoff parameter and that was introduced in [4]. Further new difficulties arise, such as in estimating the optimum value from the first few rounds, a task that follows from standard techniques in each of the special cases but is very challenging here. We develop a new way of exploration that uses the linear structure, so that one can evaluate all possible choices that could have led to an optimum solution on the historic sample. This technique might be of independent interest in estimating optimum values. One can see that the problem is indeed more than the sum of its parts, from the fact that we get the optimal bound for linCBwK only when B ≥ Ω̃(m1/2T 3/4), unlike either special case for which the optimal bound holds for all B (but is meaningful only for B = Ω̃(m √ T )). The approach in [3] (for BwK) extends to the case of “static” contexts,4 where each arm has a context that doesn’t change over time. The OSPP of [4] is not a special case of linCBwK with static contexts; this is one indication of the additional difficulty of dynamic over static contexts. Other related work. Recently, [17] showed an O( √ T ) regret in the linear contextual setting with a single budget constraint, when costs depend only on contexts and not arms. Due to space constraints, we have moved many proofs from the main part of the paper to the supplement. 2 Preliminaries 2.1 Confidence Ellipsoid Consider a stochastic process which in each round t, generates a pair of observations (rt,yt), such that rt is an unknown linear function of yt plus some 0-mean bounded noise, i.e., rt = µ ⊤ ∗ yt + ηt, where yt,µ∗ ∈ Rm, |ηt| ≤ 2R, and E[ηt|y1, r1, . . . ,yt−1, rt−1,yt] = 0. 
At any time t, a high confidence estimate of the unknown vector µ∗ can be obtained by building a “confidence ellipsoid” around the ℓ2-regularized least-squares estimate µ̂t constructed from the observations made so far. This technique is common in prior work on linear contextual bandits (e.g., in [8, 11, 1]). For any regularization parameter λ > 0, let Mt := λI + ∑t−1 i=1 yiy ⊤ i , and µ̂t := M −1 t ∑t−1 i=1 yiri. The following result from [1] shows that µ∗ lies with high probability in an ellipsoid with center µ̂t. For any positive semi-definite (PSD) matrix M, define the M -norm as ‖µ‖M := √ µ⊤Mµ. The confidence ellipsoid at time t is defined as Ct := { µ ∈ Rm : ‖µ− µ̂t‖Mt ≤ R √ m log ((1+tm/λ)/δ) + √ λm } . Lemma 2 (Theorem 2 of [1]). If ∀ t, ‖µ∗‖2 ≤ √ m and ‖yt‖2 ≤ √ m, then with prob. 1 − δ, µ∗ ∈ Ct. Another useful observation about this construction is stated below. It first appeared as Lemma 11 of [8], and was also proved as Lemma 3 in [11]. Lemma 3 (Lemma 11 of [8]). ∑T t=1 ‖yt‖M−1t ≤ √ mT log(T ). As a corollary of the above two lemmas, we obtain a bound on the total error in the estimate provided by “any point” from the confidence ellipsoid. (Proof is in Appendix D in the supplement.) 4It was incorrectly claimed in [3] that the approach can be extended to dynamic contexts without much modifications. Corollary 1. For t = 1, . . . , T , let µ̃t ∈ Ct be a point in the confidence ellipsoid, with λ = 1 and 2R = 1. Then, with probability 1− δ, ∑T t=1 |µ̃⊤t yt − µ⊤∗ yt| ≤ 2m √ T log ((1+Tm)/δ) log(T ). 2.2 Online Learning Consider a T round game played between an online learner and an adversary, where in round t, the learner chooses a θt ∈ Ω := {θ : ‖θ‖1 ≤ 1,θ ≥ 0}, and then observes a linear function gt : Ω → [−1, 1] picked by the adversary. The learner’s choice θt may only depend on learner’s and adversary’s choices in previous rounds. The goal of the learner is to minimize regret defined as the difference between the learner’s objective value and the value of the best single choice in hindsight: R(T ) := maxθ∈Ω ∑T t=1 gt(θ)− ∑T t=1 gt(θt). The multiplicative weight update (MWU) algorithm (generalization by [7]) is a fast and efficient online learning algorithm for this problem. Let gt,j := gt(1j). Then, given a parameter ǫ > 0, in round t+ 1, the choice of this algorithm takes the following form, θt+1,j = wt,j 1 + ∑ j wt,j , where wt,j = { wt−1,j(1 + ǫ) gt,j if gt,j > 0, wt−1,j(1− ǫ)−gt,j if gt,j ≤ 0. (5) with initialization w0,j = 1, for all j = 1, . . . ,K. Lemma 4. [7] For any 0 < ǫ ≤ 12 , the MWU algorithm provides the following regret bound for the online learning problem described above: R(T ) ≤ ǫT + log(d+1)ǫ . In particular, for ǫ = √ log(d+1) T , we have R(T ) ≤ √ log(d+ 1)T For the rest of the paper, we refer to the MWU algorithm with ǫ = √ log(d+1) T as the online learning (OL) algorithm, and the update in (5) as the OL update at time t+ 1. 3 Algorithm 3.1 Optimistic estimates of unknown parameters Let at denote the arm played by the algorithm at time t. In the beginning of every round, we use the outcomes and contexts from previous rounds to construct a confidence ellipsoid for µ∗ and every column of W∗. The construction of confidence ellipsoid for µ∗ follows directly from the techniques in Section 2.1 with yt = xt(at) and rt being reward at time t. To construct a confidence ellipsoid for a column j of W∗, we use the techniques in Section 2.1 while substituting yt = xt(at) and rt = vt(at)j for every j. 
As in Section 2.1, let Mt := I + ∑t−1 i=1 xi(ai)xi(ai) ⊤, and construct the regularized least squares estimate for µ∗,W∗, respectively, as µ̂t := M −1 t ∑t−1 i=1 xi(ai)ri(ai) ⊤ (6) Ŵt := M −1 t ∑t−1 i=1 xi(ai)vi(ai) ⊤. (7) Define confidence ellipsoid for parameter µ∗ as Ct,0 := { µ ∈ Rm : ‖µ− µ̂‖Mt ≤ √ m log ((d+tmd)/δ) + √ m } , and for every arm a, the optimistic estimate of µ∗ as: µ̃t(a) := argmaxµ∈Ct,0 xt(a) ⊤µ. (8) Let wj denote the j th column of a matrix W . We define a confidence ellipsoid for each column j, as Ct,j := { w ∈ Rm : ‖w − ŵtj‖Mt ≤ √ m log ((d+tmd)/δ) + √ m } , and denote by Gt, the Cartesian product of all these ellipsoids: Gt := {W ∈ Rm×d : wj ∈ Ct,j}. Note that Lemma 2 implies that W∗ ∈ Gt with probability 1− δ. Now, given a vector θt ∈ Rd, we define the optimistic estimate of the weight matrix at time t w.r.t. θt, for every arm a ∈ [K], as : W̃t(a) := argminW∈Gt xt(a) ⊤Wθt. (9) Intuitively, for the reward, we want an upper confidence bound and for the consumption we want a lower confidence bound as an optimistic estimate. This intuition aligns with the above definitions, where the maximizer was used in case of reward and a minimizer was used for consumption. The utility and precise meaning of θt will become clearer when we describe the algorithm and present the regret analysis. Using the definition of µ̃t, W̃t, along with the results in Lemma 2 and Corollary 1 about confidence ellipsoids, the following can be derived. Corollary 2. With probability 1− δ, for any sequence of θ1,θ2, . . . ,θT , 1. xt(a) ⊤µ̃t(a) ≥ xt(a)⊤µ∗, for all arms a ∈ [K], for all time t. 2. xt(a) ⊤W̃t(a)θt ≤ xt(a)⊤W∗θt, for all arms a ∈ [K], for all time t. 3. | ∑T t=1(µ̃t(at)− µ∗)⊤xt(at)| ≤ ( 2m √ T log ((1+tm)/δ) log(T ) ) . 4. ‖ ∑T t=1(W̃t(at)−W∗)⊤xt(at)‖ ≤ ‖1d‖ ( 2m √ T log ((d+tmd)/δ) log(T ) ) . Essentially, the first two claims ensure that we have optimistic estimates, and the last two claims ensure that the estimates quickly converge to the true parameters. 3.2 The core algorithm In this section, we present an algorithm and its analysis, under the assumption that a parameter Z satisfying certain properties is given. Later, we show how to use the first T0 rounds to compute such a Z, and also bound the additional regret due to these T0 rounds. We define Z now. Assumption 1. Let Z be such that for some universal constants c, c′, OPTB ≤ Z ≤ cOPTB + c′. The algorithm constructs estimates µ̂t and Ŵt as in Section 3.1. It also runs the OL algorithm for an instance of the online learning problem. The vector played by the OL algorithm in time step t is θt. After observing the context, the optimistic estimates for each arm are then constructed using θt, as defined in (8) and (9). Intuitively, θt is used here as a multiplier to combine different columns of the weight matrix, to get an optimistic weight vector for every arm. An adjusted estimated reward for arm a is then defined by using Z to linearly combine the optimistic estimate of the reward with the optimistic estimate of the consumption, as (xt(a) ⊤µ̃t(a))− Z(xt(a)⊤W̃t(a)θt). The algorithm chooses the arm which appears to be the best according to the adjusted estimated reward. After observing the resulting reward and consumption vectors, the estimates are updated. The online learning algorithm is advanced by one step, by defining the profit vector to be vt(at) − BT 1. The algorithm ends either after T time steps or as soon as the total consumption exceeds the budget along some dimension. Theorem 2. 
Given a Z as per Assumption 1, Algorithm 1 achieves the following, with prob. 1− δ: regret(T ) ≤ O ( (OPTB + 1)m √ T log(dT/δ) log(T ) ) . (Proof Sketch) We provide a sketch of the proof here, with a full proof given in Appendix E in the supplement. Let τ be the stopping time of the algorithm. The proof is in 3 steps: Step 1: Since E[vt(at)|Xt, at, Ht−1] = W⊤∗ xt(at), we apply Azuma-Hoeffding inequality to get that with high probability ∥ ∥ ∑τ t=1 vt(at)−W⊤∗ xt(at) ∥ ∥ ∞ is small. Therefore, we can work with ∑τ t=1 W ⊤ ∗ xt(at) instead of ∑τ t=1 vt(at). A similar application of Azuma-Hoeffding inequality is used to bound the gap | ∑τ t=1 rt(at) − µ⊤∗ xt(at)|, so that a lower bound on ∑τ t=1 µ ⊤ ∗ xt(at) is sufficient to lower bound the total reward ∑τ t=1 rt(at). Algorithm 1 Algorithm for linCBwK, with given Z Initialize θ1 as per the online learning (OL) algorithm. Initialize Z that satisfies Assumption 1. for all t = 1, ..., T do Observe Xt. For every a ∈ [K], compute µ̃t(a) and W̃t(a) as per (8) and (9) respectively. Play the arm at := argmaxa∈[K] xt(a) ⊤(µ̃t(a)− ZW̃t(a)θt). Observe rt(at) and vt(at). If for some j = 1..d, ∑ t′≤t vt′(at′) · ej ≥ B then EXIT. Use xt(at), rt(at) and vt(at) to obtain µ̂t+1, Ŵt+1 and Gt+1. Choose θt+1 using the OL update (refer to (5)) with gt(θt) := θt · ( vt(at)− BT 1 ) . end for Step 2: Using Corollary 2, with high probability, we can bound ∥ ∥ ∥ ∑T t=1(W∗ − W̃t(at))⊤xt(at) ∥ ∥ ∥ ∞ . It is therefore sufficient to work with the sum of vectors W̃t(at) ⊤xt(at) instead of W ⊤ ∗ xt(at), and similarly with µ̃t(at) ⊤xt(at) instead of µ ⊤ ∗ xt(at). Step 3: The proof is completed by showing the desired bound on OPT − ∑τ t=1 µ̃t(at) ⊤xt(at). This part is similar to the online stochastic packing problem; if the actual reward and consumption vectors were µ̃t(at) ⊤xt(at) and W̃t(at) ⊤xt(at), then it would be exactly that problem. We adapt techniques from [4]: use the OL algorithm and the Z parameter to combine constraints into the objective. If a dimension is being consumed too fast, then the multiplier for that dimension should increase, making the algorithm to pick arms that are not likely to consume too much along this dimension. Regret is then bounded by a combination of the online learning regret and the error in the optimistic estimates. 3.3 Algorithm with Z computation In this section, we present a modification of Algorithm 1 which computes the required parameter Z that satisfies Assumption 1, and therefore does not need to be provided with a Z as input. The algorithm computes Z using observations from the first T0 rounds. Once Z is computed, Algorithm 1 can be run for the remaining time steps. However, it needs to be modified slightly to take into account the budget consumed during the first T0 rounds. We handle this by using a smaller budget B′ = B − T0 in the computations for the remaining rounds. The modified algorithm is given below. Algorithm 2 Algorithm for linCBwK, with Z computation Inputs: B, T0, B ′ = B − T0 Using observations from first T0 rounds, compute a Z that satisfies Assumption 1. Run Algorithm 1 for T − T0 rounds and budget B′. Next, we provide the details of how to compute Z from observations in the first T0 rounds, and how to choose T0. We provide a method that takes advantage of the linear structure of the problem, and explores in the m-dimensional space of contexts and weight vectors to obtain bounds independent of K. In every round t = 1, . . . 
, T0, after observing Xt, let pt ∈ ∆[K] be pt := arg max p∈∆[K] ‖Xtp‖M−1t , (10) where Mt := I + ∑t−1 i=1(Xipi)(Xipi) ⊤. (11) Select arm at = a with probability pt(a). In fact, since Mt is a PSD matrix, due to convexity of the function ‖Xtp‖2M−1t , it is the same as playing at = argmaxa∈[K] ‖xt(a)‖M−1t . Construct estimates µ̂, Ŵt of µ∗,W∗ at time t as µ̂t := M −1 t ∑t−1 i=1(Xipi)ri(ai), Ŵt := M −1 t ∑t−1 i=1(Xipi)vi(ai) ⊤. And, for some value of γ defined later, obtain an estimate ˆOPT γ of OPT as: ˆOPT γ := maxπ T T0 ∑T0 i=1 µ̂ ⊤ i Xiπ(Xi) such that TT0 ∑T0 i=1 Ŵ ⊤ i Xiπ(Xi) ≤ B + γ. (12) For an intuition about the choice of arm in (10), observe from the discussion in Section 2.1 that every column w∗j of W∗ is guaranteed to lie inside the confidence ellipsoid centered at column ŵtj of Ŵt, namely the ellipsoid, ‖w − ŵtj‖2Mt ≤ 4m log(Tm/δ). Note that this ellipsoid has principle axes as eigenvectors of Mt, and the length of the semi-principle axes is given by the inverse eigenvalues of Mt. Therefore, by maximizing ‖Xtp‖M−1t we are choosing the context closest to the direction of the longest principal axis of the confidence ellipsoid, i.e. in the direction of the maximum uncertainty. Intuitively, this corresponds to pure exploration: by making an observation in the direction where uncertainty is large we can reduce the uncertainty in our estimate most effectively. A more algebraic explanation is as follows. In order to get a good estimate of OPT by ˆOPT γ , we want the estimates Ŵt and W∗ (and, µ̂ and µ∗) to be close enough so that ‖ ∑T0 t=1(Ŵt−Ŵ∗)⊤Xtπ(Xt)‖∞ (and, |∑T0t=1(µ̂t − µ∗)⊤Xtπ(Xt)|) is small for all policies π, and in particular for sample optimal policies. Now, using Cauchy-Schwartz these are bounded by ∑T0 t=1 ‖µ̂t − µ∗‖Mt‖Xtπ(Xt))‖M−1t , and ∑T0 t=1 ‖Ŵt −W∗‖Mt‖Xtπ(Xt))‖M−1t , where we define ‖W‖M , the M -norm of matrix W to be the max of column-wise M -norms. Using Lemma 2, the term ‖µ̂t−µ∗‖Mt is bounded by 2 √ m log(T0m/δ) , and ‖Ŵt−W∗‖Mt is bounded by 2 √ m log(T0md/δ), with probability 1−δ. Lemma 3 bounds the second term ∑T0 t=1 ‖Xtπ(Xt)‖M−1t but only when π is the played policy. This is where we use that the played policy pt was chosen to maximize ‖Xtpt‖M−1t , so that ∑T0 t=1 ‖Xtπ(Xt)‖M−1t ≤ ∑T0 t=1 ‖Xtpt‖M−1t and the bound ∑T0 t=1 ‖Xtpt‖M−1t ≤ √ mT0 log(T0) given by Lemma 3 actually bounds ∑T0 t=1 ‖Xtπ(Xt)‖M−1t for all π. Combining, we get a bound of 2m √ T0log(T0) log(T0d/δ) on deviations ‖ ∑T0 t=1(Ŵt − Ŵ∗) ⊤Xtπ(Xt)‖∞ and | ∑T0 t=1(µ̂t − µ∗)⊤Xtπ(Xt)| for all π. We prove the following lemma. Lemma 5. For γ = ( T T0 ) 2m √ T0log(T0) log(T0d/δ), with probability 1−O(δ), OPT − 2γ ≤ ˆOPT2γ ≤ OPT + 9γ(OPTB + 1). Corollary 3. Set Z = ( ˆOPT 2γ +2γ) B + 1, with the above value of γ. Then, with probability 1−O(δ), OPT B + 1 ≤ Z ≤ (1 + 11γ B )( OPT B + 1). Corollary 3 implies that as long as B ≥ γ, i.e., B ≥ Ω̃( mT√ T0 ), Z is a constant factor approximation of OPT B +1 ≥ Z∗, therefore Theorem 2 should provide an Õ ( (OPTB + 1)m √ T ) regret bound. However, this bound does not account for the budget consumed in the first T0 rounds. Considering that (at most) T0 amount can be consumed from the budget in the first T0 rounds, we have an additional regret of OPT B T0. Further, since we have B ′ = B − T0 budget for remaining T − T0 rounds, we need a Z that satisfies the required assumption for B′ instead of B (i.e., we need OPTB′ ≤ Z ≤ O(1) ( OPT B′ + 1 ) ). If B ≥ 2T0, then, B′ ≥ B/2, and using 2 times the Z computed in Corollary 3 would satisfy the required assumption. 
Together, these observations give Theorem 3.

Theorem 3. Using Algorithm 2 with T_0 such that B > max{2T_0, mT/√T_0}, and twice the Z given by Corollary 3, we get a high-probability regret bound of Õ((OPT/B + 1)(T_0 + m√T)). In particular, for B > m^{1/2} T^{3/4} and m ≤ √T, we can use T_0 = m√T to get a regret bound of Õ((OPT/B + 1) m√T).
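As a quick sanity check on the parameter choice in Theorem 3, the reduction from T_0 = m√T to the stated budget condition can be unpacked explicitly (this is just a worked instance of the theorem's condition, not an additional result):

```latex
% Plugging T_0 = m\sqrt{T} into the condition B > \max\{2T_0,\, mT/\sqrt{T_0}\}:
\frac{mT}{\sqrt{T_0}} = \frac{mT}{\sqrt{m\sqrt{T}}} = m^{1/2}\,T^{3/4},
\qquad
2T_0 = 2m\sqrt{T}.
% For m \le \sqrt{T} we have m\sqrt{T} \le m^{1/2}T^{3/4}, since
\frac{m\sqrt{T}}{m^{1/2}T^{3/4}} = \left(\frac{m}{\sqrt{T}}\right)^{1/2} \le 1,
% so, up to the constant 2, the condition reduces to B \gtrsim m^{1/2}T^{3/4},
% and the regret bound becomes
\tilde{O}\!\left(\Big(\tfrac{\mathrm{OPT}}{B}+1\Big)\big(T_0 + m\sqrt{T}\big)\right)
= \tilde{O}\!\left(\Big(\tfrac{\mathrm{OPT}}{B}+1\Big)\, m\sqrt{T}\right).
```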
1. What is the focus of the paper in terms of the linear contextual bandit problem?
2. What are the unique assumptions made in the paper regarding the reward and consumption generation?
3. What is the main contribution of the paper in terms of the proposed algorithm and its regret bound?
4. What are the strengths of the paper in terms of its writing quality and proof sketches?
5. What are the weaknesses of the paper, particularly in terms of the budget requirement and its comparison with other works?
Review
Review This paper considers the linear contextual bandit problem with budgeted resources (a knapsack problem). Different from previous works, this paper assumes linearity of both the reward and the consumption. Based on this assumption, the authors utilize state-of-the-art techniques from linear contextual bandits, bandits with knapsacks, and OSPP, and prove that, with high probability, the algorithm proposed in this paper (for linCBwK) achieves an O(m \sqrt{\log(T) T}) regret bound with a budget requirement of \Omega(mT^{3/4}). The problem is motivated by a novel assumption of linear reward and consumption generation. The paper is well written and pleasant to read. The authors prove that their proposed algorithm achieves a near-optimal regret bound. The sketch of the proof is clear and seems solid to me (however, I did not check every proof in detail). Although the regret bound of the proposed algorithm is sound and promising, the authors also mention that the budget requirement is \Omega(mT^{3/4}), which ideally could be \Omega(mT^{1/2}). The sketch of the proof mostly follows [5], and it would be better if the authors could provide more intuition and analysis regarding why this budget requirement is inevitable under this framework. *** More intuition and a comparison with [5] are provided in the rebuttal. ***
NIPS
Title On Path Integration of Grid Cells: Group Representation and Isotropic Scaling Abstract Understanding how grid cells perform path integration calculations remains a fundamental problem. In this paper, we conduct theoretical analysis of a general representation model of path integration by grid cells, where the 2D self-position is encoded as a higher dimensional vector, and the 2D self-motion is represented by a general transformation of the vector. We identify two conditions on the transformation. One is a group representation condition that is necessary for path integration. The other is an isotropic scaling condition that ensures locally conformal embedding, so that the error in the vector representation translates conformally to the error in the 2D self-position. Then we investigate the simplest transformation, i.e., the linear transformation, uncover its explicit algebraic and geometric structure as matrix Lie group of rotation, and explore the connection between the isotropic scaling condition and a special class of hexagon grid patterns. Finally, with our optimization-based approach, we manage to learn hexagon grid patterns that share similar properties of the grid cells in the rodent brain. The learned model is capable of accurate long distance path integration. Code is available at https://github.com/ruiqigao/grid-cell-path. 1 Introduction Imagine walking in the darkness. Purely based on the sense of self-motion, one can gain a sense of self-position by integrating the self motion - a process often referred to as path integration [10, 14, 21, 15, 27]. While the exact neural underpinning of path integration remains unclear, it has been hypothesized that the grid cells [21, 17, 40, 24, 23, 12] in the mammalian medial entorhinal cortex (mEC) may be involved in this process [20, 30, 22]. The grid cells are so named because individual neurons exhibit striking firing patterns that form hexagonal grids when the agent (such as a rat) navigates in a 2D open field [18, 21, 16, 6, 34, 5, 7, 11, 29, 1]. The grid cells also interact with the place cells in the hippocampus [28]. Unlike a grid cell that fires at the vertices of a lattice, a place cell often fires at a single (or a few) locations. The purpose of this paper is to understand how the grid cells may perform path integration calculations. We study a general optimization-based representational model in which the 2D self-position is ∗The author is now a Research Scientist at Google Brain team. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). represented by a higher dimensional vector and the 2D self-motion is represented by a transformation of the vector. The vector representation can be considered position encoding or position embedding, where the elements of the vector may be interpreted as activities of a population of grid cells. The transformation can be realized by a recurrent network that acts on the vector. Our focus is to study the properties of the transformation. Specifically, we identify two conditions for the transformation: a group representation condition and an isotropic scaling condition, under which we demonstrate that the local neighborhood around each self-position in the 2D physical space is embedded conformally as a 2D neighborhood around the vector representation of the self-position in the neural space. 
We then investigate the simplest special case of the transformation, i.e., linear transformation, that forms a matrix Lie group of rotation, under which case we show that the isotropic scaling condition is connected to a special class of hexagonal grid patterns. Our numerical experiments demonstrate that our model learns clear hexagon grid patterns of multiple scales which share observed properties of the grid cells in the rodent brain, by optimizing a simple loss function. The learned model is also capable of accurate long distance path integration. Contributions. Our work contributes to understanding the grid cells from the perspective of representation learning. We conduct theoretical analysis of (1) general transformation for path integration by identifying two key conditions and a local conformal embedding property, (2) linear transformation by revealing the algebraic and geometric structure and connecting the isotropic scaling condition and a special class of hexagon grid patterns, and (3) integration of linear transformation model and linear basis expansion model via unitary group representation theory. Experimentally we learn clear hexagon grid patterns that are consistent with biological observations, and the learned model is capable of accurate path integration. 2 General transformation 2.1 Position embedding Consider an agent (e.g., a rat) navigating within a 2D open field. Let x = (x1,x2) be the selfposition of the agent. We assume that the selfposition x in the 2D physical space is represented by the response activities of a population of d neurons (e.g., d = 200), which form a vector v(x) = (vi(x), i = 1, ...,d)> in the ddimensional “neural space”, with each element vi(x) representing the firing rate of one neuron when the animal is at location x. v(x) can be called position encoding or position embedding. Collectively, (v(x),∀x) forms a codebook of x ∈ R2, and (v(x),∀x) is a 2D manifold in the d-dimensional neural space, i.e., globally we embed R2 as a 2D manifold in the neural space. Locally, we identify two condi- tions under which the 2D local neighborhood around each x is embedded conformally as a 2D neighborhood around v(x) with a scaling factor. See Fig. 1. As shown in Section 3.3, the conformal embedding is connected to the hexagon grid patterns. 2.2 Transformation and path integration At self-position x, if the agent makes a self-motion ∆x = (∆x1,∆x2), then it moves to x+∆x. Correspondingly, the vector representation v(x) is transformed to v(x+∆x). The general form of the transformation can be formulated as: v(x+∆x) = F(v(x),∆x). (1) The transformation F(·,∆x) can be considered a representation of ∆x, which forms a 2D additive group. We call Eq. (1) the transformation model. It can be implemented by a recurrent network to derive a path integration model: if the agent starts from x0, and makes a sequence of moves (∆xt , t = 1, ...,T ), then the vector is updated by vt = F(vt−1,∆xt), where v0 = v(x0), and t = 1, ...,T . 2.3 Group representation condition The solution to the transformation model (Eq. (1)) should satisfy the following condition. Condition 1. (Group representation condition) (v(x),∀x) and (F(·,∆x),∀∆x) form a representation of the 2D additive Euclidean group R2 in the sense that F(v(x),0) = v(x), ∀x; (2) F(v(x),∆x1 +∆x2) = F(F(v(x),∆x1),∆x2), ∀x,∆x1,∆x2. (3) (F(·,∆x),∀∆x) is a Lie group of transformations acting on the codebook manifold (v(x),∀x). The reason for (2) is that if ∆x = 0, then F(·,0) should be the identity transformation. 
Thus the codebook manifold (v(x),∀x) consists of fixed points of the transformation F(·,0). If F(·,0) is furthermore a contraction around (v(x),∀x), then (v(x),∀x) are the attractor points. The reason for (3) is that the agent can move in one step by ∆x1 +∆x2, or first move by ∆x1, and then move by ∆x2. Both paths would end up at the same x+∆x1 +∆x2, which is represented by the same v(x+∆x1 +∆x2). The group representation condition is a necessary self-consistent condition for the transformation model (Eq. (1)). 2.4 Egocentric self-motion Self-motion ∆x can also be parametrized egocentrically as (∆r,θ), where ∆r is the displacement along the direction θ ∈ [0,2π], so that ∆x= (∆x1 = ∆r cosθ ,∆x2 = ∆r sinθ). The egocentric self-motion may be more biologically plausible where θ is encoded by head direction, and ∆r can be interpreted as the speed along direction θ . The transformation model then becomes v(x+∆x) = F(v(x),∆r,θ), (4) where we continue to use F(·) for the transformation (with slight abuse of notation). (∆r,θ) form a polar coordinate system around x. 2.5 Infinitesimal self-motion and directional derivative In this subsection, we derive the transformation model for infinitesimal self-motion. While we use ∆x or ∆r to denote finite (non-infinitesimal) self-motion, we use δx or δ r to denote infinitesimal self-motion. At self-position x, for an infinitesimal displacement δ r along direction θ , δx= (δx1 = δ r cosθ ,δx2 = δ r sinθ). See Fig. 1 (a) for an illustration. Given that δ r is infinitesimal, for any fixed θ , a first order Taylor expansion of F(v(x),δ r,θ) with respect to δ r gives us v(x+δx) = F(v(x),δ r,θ) = F(v(x),0,θ)+F ′(v(x),0,θ)δ r+o(δ r) = v(x)+ fθ (v(x))δ r+o(δ r), (5) where F(v(x),0,θ) = v(x) according to Condition 1, and fθ (v(x)) := F ′(v(x),0,θ) is the first derivative of F(v(x),∆r,θ) with respect to ∆r at ∆r = 0. fθ (v(x)) is the directional derivative of F(·) at self-position x and direction θ . For a fixed θ , (F(·,∆r,θ),∀∆r) forms a one-parameter Lie group of transformations, and fθ (·) is the generator of its Lie algebra. 2.6 Isotropic scaling condition With the directional derivative, we define the second condition as follows, which leads to locally conformal embedding and is connected to hexagon grid pattern. Condition 2. (Isotropic scaling condition) For any fixed x, ‖ fθ (v(x))‖ is constant over θ . Let f0(v(x)) denote fθ (v(x)) for θ = 0, and fπ/2(v(x)) denote fθ (v(x)) for θ = π/2. Then we have the following theorem: Theorem 1. Assume group representation condition 1 and isotropic scaling condition 2. At any fixed x, for the local motion δx= (δ r cosθ ,δ r sinθ) around x, let δv = v(x+δx)−v(x) be the change of vector and s = ‖ fθ (v(x))‖, then we have ‖δv‖= s‖δx‖. Moreover, δv = fθ (v(x))δ r+o(δ r) = f0(v(x))δ r cosθ + fπ/2(v(x))δ r sinθ +o(δ r), (6) where f0(v(x)) and fπ/2(v(x)) are two orthogonal basis vectors of equal norm s. See Supplementary for a proof and Fig. 1(b) for an illustration. Theorem 1 indicates that the local 2D polar system around self-position x in the 2D physical space is embedded conformally as a 2D polar system around vector v(x) in the d-dimensional neural space, with a scaling factor s (our analysis is local for any fixed x, and s may depend on x). Conformal embedding is a generalization of isometric embedding, where the metric can be changed by a scaling factor s. If s is globally constant for all x, then the intrinsic geometry of the codebook manifold (v(x),∀x) remains Euclidean, i.e., flat. 
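A small numerical illustration of Theorem 1 may help here. The sketch below uses a hand-picked toy embedding (cos/sin plane waves along three wave vectors of equal norm, 2π/3 apart, anticipating Theorem 4 in Section 3.3) and checks by finite differences that ‖δv‖/‖δx‖ is the same for every motion direction θ. The wave-vector norm and all variable names are choices made for this example only.

```python
import numpy as np

# Finite-difference check of the conformal embedding property ||dv|| = s * ||dx||
# (Theorem 1) for a toy plane-wave embedding; the embedding itself is an assumption
# of this sketch, not the paper's learned representation.

angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
a = 10.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # three wave vectors, shape (3, 2)

def v(x):
    phase = a @ x                                  # <a_j, x> for j = 1, 2, 3
    return np.concatenate([np.cos(phase), np.sin(phase)])

x0 = np.array([0.37, -0.21])
dr = 1e-4
for theta in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    dx = dr * np.array([np.cos(theta), np.sin(theta)])
    s_hat = np.linalg.norm(v(x0 + dx) - v(x0)) / np.linalg.norm(dx)
    print(f"theta = {theta:.2f}  ||dv||/||dx|| = {s_hat:.4f}")
# For this embedding the ratio is ~ sqrt(3/2) * 10 for every theta, as the
# isotropic scaling condition requires.
```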
Why isotropic scaling and conformal embedding? The neurons are intrinsically noisy. During path integration, the errors may accumulate in v. Moreover, when inferring self-position from visual image, it is possible that v is inferred first with error, and then x is decoded from the inferred v. Due to isotropic scaling and conformal embedding, locally we have ‖δv‖= s‖δx‖, which guarantees that the `2 error in v translates proportionally to the `2 error in x, so that there will not be adversarial perturbations in v(x) that cause excessively big errors in x. Specifically, we have the following theorem. Theorem 2. Assume the general transformation model (Eq. (4)) and the isotropic scaling condition. For any fixed x, let s = ‖ fθ (v(x))‖, which is independent of θ . Suppose the neurons are noisy: v = v(x)+ ε , where ε ∼N (0,τ2Id) and d is the dimensionality of v. Suppose the agent infers its 2D position x̂ from v by x̂ = argminx′ ‖v−v(x′)‖2, i.e., v(x̂) is the projection of v onto the 2D manifold formed by (v(x′),∀x′). Then we have E‖x̂−x‖2 = 2τ2/s2. (7) See Supplementary for a proof. Connection to continuous attractor neural network (CANN) defined on 2D torus. The group representation condition and the isotropic scaling condition appear to be satisfied by the CANN models [2, 6, 7, 29, 1] that are typically hand-designed on a 2D torus. See Supplementary for details. 3 Linear transformation After studying the general transformation, we now investigate the linear transformation of v(x), for the following reasons. (1) It is the simplest transformation for which we can derive explicit algebraic and geometric results. (2) It enables us to connect the isotropic scaling condition to a special class of hexagon grid patterns. (3) In Section 4, we integrate it with the basis expansion model, which is also linear in v(x), via unitary group representation theory. For finite (non-infinitesimal) self-motion, the linear transformation model is: v(x+∆x) = F(v(x),∆x) =M(∆x)v(x), (8) where M(∆x) is a matrix. The group representation condition becomes M(∆x1 +∆x2)v(x) = M(∆x2)M(∆x1)v(x), i.e., M(∆x) is a matrix representation of self-motion ∆x, and M(∆x) acts on the coding manifold (v(x),∀x)). For egocentric parametrization of self-motion (∆r,θ), we can further write M(∆x) =Mθ (∆r) for ∆x = (∆r cosθ ,∆r sinθ), and the linear model becomes v(x+∆x) = F(v(x),∆r,θ) =Mθ (∆r)v(x). 3.1 Algebraic structure: matrix Lie algebra and Lie group For the linear model (Eq. (8)), the directional derivative is: fθ (v(x)) = F ′(v(x),0,θ) = M ′θ (0)v(x) =B(θ)v(x), where B(θ) =M ′ θ (0), which is the derivative of Mθ (∆r) with respect to ∆r at 0. For infinitesimal self-motion, the transformation model in Eq. (5) becomes v(x+δx) = (I+B(θ)δ r)v(x)+o(δ r), (9) where I is the identity matrix. It can be considered a linear recurrent network where B(θ) is the learnable weight matrix. We have the following theorem for the algebraic structure of the linear transformation. Theorem 3. Assume the linear transformation model so that for infinitesimal self-motion (δ r,θ), the model is in the form of Eq. (9), then for finite displacement ∆r, v(x+∆x) =Mθ (∆r)v(x) = exp(B(θ)∆r)v(x). (10) Proof. We can divide ∆r into N steps, so that δ r = ∆r/N→ 0 as N→ ∞, and v(x+∆x) = (I+B(θ)(∆r/N)+o(1/N))Nv(x)→ exp(B(θ)∆r)v(x) (11) as N→ ∞. The matrix exponential map is defined by exp(A) = ∑∞n=0 An/n!. The above math underlies the relationship between matrix Lie algebra and matrix Lie group in general [38]. 
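Theorem 2 can likewise be checked by simulation. The sketch below reuses the toy plane-wave embedding from the previous example, adds Gaussian noise to v(x), decodes x̂ with a local optimizer initialized at the true x (reasonable here because the statement is local), and compares the empirical squared error to 2τ²/s². The embedding, noise level, and optimizer choice are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Monte Carlo check of Theorem 2: E||x_hat - x||^2 = 2 tau^2 / s^2 for small noise.
# The plane-wave embedding and the local decoding are assumptions of this sketch.

angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
a = 10.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def v(x):
    phase = a @ x
    return np.concatenate([np.cos(phase), np.sin(phase)])

s = np.sqrt(1.5) * 10.0          # ||f_theta(v(x))|| for this embedding
tau = 0.01
rng = np.random.default_rng(1)
x_true = np.array([0.30, 0.40])

errs = []
for _ in range(500):
    noisy = v(x_true) + tau * rng.normal(size=6)
    res = minimize(lambda x: np.sum((noisy - v(x)) ** 2), x0=x_true, method="BFGS")
    errs.append(np.sum((res.x - x_true) ** 2))

print("empirical E||x_hat - x||^2:", np.mean(errs))
print("theoretical 2*tau^2/s^2  :", 2 * tau**2 / s**2)
```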
For a fixed θ , the set of Mθ (∆r) = exp(B(θ)∆r) for ∆r ∈ R forms a matrix Lie group, which is both a group and a manifold. The tangent space of Mθ (∆r) at identity I is called matrix Lie algebra. B(θ) is the basis of this tangent space, and is often referred to as the generator matrix. Path integration. If the agent starts from x0, and make a sequence of moves ((∆rt ,θt), t = 1, ...,T ), then the vector representation of self-position is updated by vt = exp(B(θt)∆rt)vt−1, (12) where v0 = v(x0), and t = 1, ...,T . Approximation to exponential map. For a finite but small ∆r, exp(B(θ)∆r) can be approximated by a second-order (or higher-order) Taylor expansion exp(B(θ)∆r) = I+B(θ)∆r+B(θ)2∆r2/2+o(∆r2). (13) 3.2 Geometric structure: rotation, periodicity, metic and error correction If we assume B(θ) = −B(θ)>, i.e., skew-symmetric, then I +B(θ)δ r in Eq. (9) is a rotation matrix operating on v(x), due to the fact that (I+B(θ)δ r)(I+B(θ)δ r)> = I+O(δ r2). For finite ∆r, exp(B(θ)∆r) is also a rotation matrix, as it equals to the product of N matrices I+B(θ)(∆r/N) (Eq. (11)). The geometric interpretation is that, if the agent moves along the direction θ in the physical space, the vector v(x) is rotated by the matrix B(θ) in the neural space, while the `2 norm ‖v(x)‖2 remains fixed. We may interpret ‖v(x)‖2 = ∑di=1 vi(x)2 as the total energy of grid cells. See Fig. 1(b). The angle of rotation is given by ‖B(θ)v(x)‖δ r/‖v(x)‖, because ‖B(θ)v(x)‖δ r is the arc length and ‖v(x)‖ is the radius. If we further assume the isotropic scaling condition, which becomes that ‖ fθ (v(x))‖= ‖B(θ)v(x)‖ is constant over θ for the linear model, then the angle of rotation can be written as µδ r, where µ = ‖B(θ)v(x)‖/‖v(x)‖ is independent of θ . Geometrically, µ tells us how fast the vector rotates in the neural space as the agent moves in the physical space. In practice, µ can be much bigger than 1 for the learned model, thus the vector can rotate back to itself in a short distance, causing the periodic patterns in the elements of v(x). µ captures the notion of metric. For µ 1, the conformal embedding in Fig. 1 (b) magnifies the local motion in Fig. 1 (a), and this enables error correction [34]. More specifically, we have the following result, which is based on Theorem 2. Proposition 1. Assume the linear transformation model (Eq. (9)) and the isotropic scaling condition 2. For any fixed x, let µ = ‖B(θ)v(x)‖/‖v(x)‖. Suppose v= v(x)+ε , where ε ∼N (0,τ2Id) and τ2 = α2(‖v(x)‖2/d), so that α2 measures the variance of noise relative to the average magnitude of (vi(x)2, i= 1, ...,d). Suppose the agent infers its 2D position x̂ from v by x̂= argminx′ ‖v−v(x′)‖2. Then we have E‖x̂−x‖2 = 2α2/(µ2d). (14) See Supplementary for a proof. By the above proposition, error correction of grid cells is due to two factors: (1) higher dimensionality d of v(x) for encoding 2D positions x, and (2) a magnifying µ 1 (our analysis is local for any fixed x, and µ may depend on x). 3.3 Hexagon grid patterns formed by mixing Fourier waves In this subsection, we make connection between the isotropic scaling condition 2 and a special class of hexagon grid patterns created by linearly mixing three Fourier plane waves whose directions are 2π/3 apart. We show such linear mixing satisfies the linear transformation model and the isotropic scaling condition. Theorem 4. Let e(x) = (exp(i〈a j,x〉), j = 1,2,3)>, where (a j, j = 1,2,3) are three 2D vectors of equal norm, and the angle between every pair of them is 2π/3. 
Let v(x) =Ue(x), where U is an arbitrary unitary matrix. Let B(θ) =U ∗D(θ)U , where D(θ) = diag(i〈a j,q(θ)〉, j = 1,2,3), with q(θ) = (cosθ ,sinθ)>. Then (v(x),B(θ)) satisfies the linear transformation model (Eq. (9)) and the isotropic scaling condition 2. Moreover, B(θ) is skew-symmetric. See Supplementary for a proof. We would like to emphasize that the above theorem analyzes a special case solution to our linear transformation model, but our optimization-based learning method does not assume any superposition of Fourier basis functions as in the theorem. Our experimental results are learned purely by optimizing a loss function based on the simple assumptions of our model with generic vectors and matrices. We leave it to future work to theoretically prove that the isotropic scaling condition leads to hexagon grid patterns in either the general transformation model or the linear transformation model. The hexagon grid patterns are not limited to superpositions of three plane waves as in the above theorem. 3.4 Modules Biologically, it is well established that grid cells are organized in discrete modules [4, 37] or blocks. We thus partition the vector v(x) into K blocks, v(x) = (vk(x),k = 1, ...,K). Correspondingly the generator matrices B(θ) = diag(Bk(θ),k = 1, ...,K) are block diagonal, so that each sub-vector vk(x) is rotated by a sub-matrix Bk(θ). For the general transformation model, each sub-vector is transformed by a separate sub-network. By the same argument as in Section 3.2, let µk = ‖Bkvk(x)‖/‖vk(x)‖, then µk is the metric of module k. 4 Interaction with place cells 4.1 Place cells For each v(x), we need to uniquely decode x globally. This can be accomplished via interaction with place cells. Specifically, each place cell fires when the agent is at a specific position. Let A(x,x′) be the response map of the place cell associated with position x′. It measures the adjacency between x and x′. A commonly used form of A(x,x′) is the Gaussian adjacency kernel A(x,x′) = exp(−‖x−x′‖2/(2σ2)). The set of Gaussian adjacency kernels serve as inputs to our optimizationbased method to learn grid cells. 4.2 Basis expansion A popular model that connects place cells and grid cells is the following basis expansion model (or PCA-based model) [13]: A(x,x′) = 〈v(x),u(x′)〉= d ∑ i=1 ui,x′vi(x), (15) where v(x) = (vi(x), i = 1, ...,d)>, and u(x′) = (ui,x′ , i = 1, ...,d)>. Here (vi(x), i = 1, ...,d) forms a set of d basis functions (which are functions of x) for expanding A(x,x′) (which is a function of x for each place x′), while u(x′) is the read-out weight vector for place cell at x′ and needs to be learned. See Fig. 2 for an illustration. Experimental results on biological brains have shown that the connections from grid cells to place cells are excitatory [42, 31]. We thus assume that ui,x′ ≥ 0 for all i and x′. 4.3 From group representation to basis functions The vector representation v(x) generated (or constrained) by the linear transformation model (Eq. (8)) can serve as basis functions of the PCA-based basis expansion model (Eq. (15)), due to the fundamental theorems of Schur [41] and Peter-Weyl [38], which reveal the deep root of Fourier analysis and generalize it to general Lie groups. Specifically, if M(∆x) is an irreducible unitary representation of ∆x that forms a compact Lie group, then the elements {Mi j(∆x)} form a set of orthogonal basis functions of ∆x. Let v(x) = M(x)v(0) (where we choose the origin 0 as the reference point). 
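The construction in Theorem 4 (Section 3.3) is easy to visualize directly. The sketch below builds e(x) from three plane waves whose wave vectors have equal norm and are 2π/3 apart, mixes them with a random unitary matrix, and renders the real part of the resulting units on a lattice, which shows hexagonally periodic firing maps. The wave-vector norm, lattice size, and mixing matrix are illustrative choices, not the paper's learned parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the Theorem 4 construction: mix three Fourier plane waves whose wave
# vectors have equal norm and are 2*pi/3 apart. All constants are illustrative.

rng = np.random.default_rng(0)
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
a = 30.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)    # (3, 2) wave vectors

# Random unitary U via QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

xs = np.linspace(0.0, 1.0, 40)
X1, X2 = np.meshgrid(xs, xs)
coords = np.stack([X1.ravel(), X2.ravel()], axis=1)              # (1600, 2)

e = np.exp(1j * coords @ a.T)                                    # e(x), shape (1600, 3)
v = e @ Q.T                                                      # v(x) = U e(x)

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for j, ax in enumerate(axes):
    ax.imshow(np.real(v[:, j]).reshape(40, 40))                  # hexagonally periodic map
    ax.set_axis_off()
plt.tight_layout()
plt.show()
```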
The elements of v(x), i.e., (vi(x), i = 1, ...,d), are linear mixings of the basis functions {Mi j(x)}, so that they themselves form a new set of basis functions that serve to expand (A(x,x′),∀x′) that parametrizes the place cells. Thus group representation in our path integration model is a perfect match to the basis expansion model, in the sense that the basis functions are results of group representation. The basis expansion model (or PCA-based model) (Eq. (15)) assumes that the basis functions are orthogonal, whereas in our work, we do not make the orthogonality assumption. Interestingly, the learned transformation model generates basis functions that are close to being orthogonal automatically. See Supplementary for more detailed explanation and experimental results. 4.4 Decoding and re-encoding For a neural response vector v, such as vt in Eq. (12), the response of the place cell associated with location x′ is 〈v,u(x′)〉. We can decode the position x̂ by examining which place cell has the maximal response, i.e., x̂= argmax x′ 〈v,u(x′)〉. (16) After decoding x̂, we can re-encode v← v(x̂) for error correction. Decoding and re-encoding can also be done by directly projecting v onto the manifold (v(x),∀x), which gives similar results. See Supplementary for more analysis and experimental results. 5 Learning We learn the model by optimizing a loss function defined based on three model assumptions discussed above: (1) the basis expansion model (Eq. (15)), (2) the linear transformation model (Eq. (10)) and (3) the isotropic scaling condition 2. The input is the set of adjacency kernels A(x,x′),∀x,x′. The unknown parameters to be learned are (1) (v(x) = (vk(x),k = 1, ...,K),∀x), (2) (u(x′),∀x′) and (3) (B(θ),∀θ). We assume that there are K modules or blocks and B(θ) is skew-symmetric, so that B(θ) are parametrized as block-diagonal matrices (Bk(θ),k = 1, ...,K),∀θ) and only the lower triangle parts of the matrices need to be learned. The loss function is defined as a weighted sum of simple `2 loss terms constraining the three model assumptions: L = L0 +λ1L1 +λ2L2, where L0 = Ex,x′ [A(x,x′)−〈v(x),u(x′)〉]2, (basis expansion) (17) L1 = K ∑ k=1 Ex,∆x‖vk(x+∆x)− exp(Bk(θ)∆r)vk(x)‖2, (transformation) (18) L2 = K ∑ k=1 Ex,θ ,∆θ [‖Bk(θ +∆θ)vk(x)‖−‖Bk(θ)vk(x)‖]2. (isotropic scaling) (19) In L1, ∆x = (∆r cosθ ,∆r sinθ). λ1 and λ2 are chosen so that the three loss terms are of similar magnitudes. A(x,x′) are given as Gaussian adjacency kernels. For regularization, we add a penalty on ‖u(x′)‖2, and further assume u(x′)≥ 0 so that the connections from grid cells to place cells are excitatory [42, 31]. However, note that u(x′)≥ 0 is not necessary for the emergence of hexagon grid patterns as shown in the ablation studies. Expectations in L0, L1 and L2 are approximated by Monte Carlo samples. L is minimized by Adam [25] optimizer. See Supplementary for implementation details. It is worth noting that, consistent with the experimental observations, we assume individual place field A(x,x′) to exhibit a Gaussian shape, rather than a Mexican-hat pattern (with balanced excitatory center and inhibitory surround) as assumed in previous basis expansion models [13, 33] of grid cells. ReLU non-linearity. We also experiment with a non-linear transformation model where a ReLU activation is added. See Supplementary for details. 6 Experiments We conduct numerical experiments to learn the representations as described in Section 5. 
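Before the experimental details, a minimal Monte Carlo sketch of the three loss terms of Section 5 (Eqs. (17)-(19)) may be useful as a reference. It uses a single module, randomly initialized parameters, motions restricted to the four lattice axis directions, and the second-order Taylor approximation of Eq. (13); all of these simplifications are assumptions of the sketch rather than the paper's training configuration.

```python
import numpy as np

# Monte Carlo sketch of L0 (basis expansion), L1 (transformation) and L2 (isotropic
# scaling) for one module with random parameters; lattice size, neuron count and the
# coarse direction discretization are assumptions of this sketch.

rng = np.random.default_rng(0)
n, d, sigma = 20, 12, 0.07
xs = np.linspace(0.0, 1.0, n)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1).reshape(-1, 2)
dr = xs[1] - xs[0]                                   # one lattice step

V = 0.1 * rng.normal(size=(n * n, d))                # v(x) on the lattice
U = np.abs(0.1 * rng.normal(size=(n * n, d)))        # u(x'), kept non-negative
C = 0.1 * rng.normal(size=(4, d, d))
B = C - np.swapaxes(C, 1, 2)                         # skew-symmetric B(theta), theta in {0, pi/2, pi, 3pi/2}
step = np.array([n, 1, -n, -1])                      # lattice index offset per direction

def transform(v, Bt):
    # Second-order Taylor approximation of exp(B(theta) * dr), Eq. (13).
    return v + dr * (v @ Bt.T) + 0.5 * dr**2 * (v @ (Bt @ Bt).T)

def losses(n_samples=512):
    i = rng.integers(0, n * n, n_samples)
    j = rng.integers(0, n * n, n_samples)
    A = np.exp(-np.sum((grid[i] - grid[j]) ** 2, axis=1) / (2 * sigma**2))
    L0 = np.mean((A - np.sum(V[i] * U[j], axis=1)) ** 2)                 # Eq. (17)

    i1 = rng.integers(1, n - 1, n_samples)
    i2 = rng.integers(1, n - 1, n_samples)
    k = i1 * n + i2                                                      # interior points
    t = rng.integers(0, 4, n_samples)                                    # direction index
    V_next = V[k + step[t]]
    V_pred = np.stack([transform(V[k[s]], B[t[s]]) for s in range(n_samples)])
    L1 = np.mean(np.sum((V_next - V_pred) ** 2, axis=1))                 # Eq. (18)

    norm_t = np.array([np.linalg.norm(V[k[s]] @ B[t[s]].T) for s in range(n_samples)])
    norm_next = np.array([np.linalg.norm(V[k[s]] @ B[(t[s] + 1) % 4].T) for s in range(n_samples)])
    L2 = np.mean((norm_next - norm_t) ** 2)                              # Eq. (19)
    return L0, L1, L2

print(losses())
```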
Specifically, we use a square environment with size 1m × 1m, which is discretized into a 40× 40 lattice. For direction, we discretize the circle [0,2π] into 144 directions and use nearest neighbor linear interpolations for values in between. We use the second-order Taylor expansion (Eq. (13)) to approximate the exponential map exp(B(θ)∆r). The displacement ∆r are sampled within a small range, i.e., ∆r is smaller than 3 grids on the lattice. For A(x,x′), we use a Gaussian adjacency kernel with σ = 0.07. v(x) is of d = 192 dimensions, which is partitioned into K = 16 modules, each of which has 12 cells. 6.1 Hexagon grid patterns Fig. 3 shows the learned firing patterns of v(x) = (vi(x), i = 1, ...,d) over the 40×40 lattice of x. Every row shows the learned units belonging to the same block or module. Regular hexagon grid patterns emerge. Within each block or module, the scales and orientations are roughly the same, but with different phases or spatial shifts. For the learned B(θ), each element shows regular sine/cosine tuning over θ . See Supplementary for more learned patterns. We further investigate the characteristics of the learned firing patterns of v(x) using measures adopted from the literature of grid cells. Specifically, the hexagonal regularity, scale and orientation of grid-like patterns are quantified using the gridness score, grid scale and grid orientation [26, 32], which are determined by taking a circular sample of the autocorrelogram of the response map. Table 1 summarizes the results of gridness scores and comparisons with other optimization-based approaches [3, 33]. We apply the same threshold to determine whether a learned neuron can be considered a grid cell as in [3] (i.e., gridness score > 0.37). For our model, 73.10% of the learned neurons exhibit significant hexagonal periodicity in terms of the gridness score. Fig. 4 shows the histogram of grid scales of the learned grid cell neurons (mean 0.33, range 0.21 to 0.49), which follows a multi-modal distribution. The ratio between neighboring modes are roughly 1.52 and 1.51, which closely matches the theoretical predictions [39, 36] and also the empirical results from rodent grid cells [37]. Collectively, these results reveal striking, quantitative correspondence between the properties of our model neurons and those of the grid cells in the brain. Connection to continuous attractor neural network (CANN) defined on 2D torus. The fact that the learned response maps of each module are shifted versions of a common hexagon periodic pattern implies that the learned codebook manifold forms a 2D torus, and as the agent moves, the responses of the grid cells undergo a cyclic permutation. This is consistent with the CANN models hand-crafted on 2D torus. See Supplementary for a detailed discussion. Ablation studies. We conduct ablation studies to examine whether certain model assumptions are empirically important for the emergence of hexagon grid patterns. The conclusions are highlighted as follows: (1) The loss term L2 (Eq. (19)) constraining the isotropic scaling condition is necessary for learning hexagon grid patterns. (2) The constraint u(x′)≥ 0 is not necessary for learning hexagon patterns, but the activations can be either excitatory or inhibitory without the constraint. (3) The skew-symmetric assumption on B(θ) is not important for learning hexagon grid pattern. (4) Hexagon patterns always emerge regardless of the choice of block size and number of blocks. 
(5) Multiple blocks or modules are necessary for the emergence of hexagon grid patterns of multiple scales. See Fig. 5 for several learned patterns and Supplementary for the full studies. 6.2 Path integration We then examine the ability of the learned model on performing multi-step path integration, which can be accomplished by recurrently updating vt (Eq. (12)) and decoding vt to xt for t = 1, ...,T (Eq. (16)). Re-encoding vt ← v(xt) after decoding is adopted. Fig. 6(a) shows an example trajectory of accurate path integration for number of time steps T = 30. As shown in Fig. 6(b), with re-encoding, the path integration error remains close to zero over a duration of 500 time steps (< 0.01 cm, averaged over 1,000 episodes), even if the model is trained with the single-time-step transformation model (Eq. (18)). Without re-encoding, the error goes slight higher but still remains small (ranging from 0.0 to 4.2 cm, mean 1.9 cm in the 1m × 1m environment). Fig. 6(c) summarizes the path integration performance by fixing the number of blocks and altering the block size. The performance of path integration would be improved as the block size becomes larger, i.e., with more neurons in each module. When block size is larger than 16, path integration is very accurate for the time steps tested. Error correction. See Supplementary for numerical experiments on error correction, which show that the learn model is still capable of path integration when we apply Gaussian white noise errors or Bernoulli drop-out errors to vt . 6.3 Additional experiments on path planning and egocentric vision We also conduct additional experiments on path planning and egocentric vision with our model. Path planning can be accomplished by steepest ascent on the adjacency to the target position. For egocentric vision, we learn an extra generator network that generates the visual image given the position encoding formed by the grid cells. See Supplementary for details. 7 Related work Our work is related to several lines of previous research on modeling grid cells. First, RNN models have been used to model grid cells and path integration. The traditional approach uses simulation-based models with hand-crafted connectivity, known as continuous attractor neural network (CANN) [2, 6, 7, 29, 1]. On the other hand, more recently two pioneering papers [9, 3] developed optimization-based RNN approaches to learn the path integration model and discovered that grid-like response patterns can emerge in the optimized networks. These results are further substantiated in [33, 8]. Our work analyzes the properties of the general recurrent model for path integration, and these properties seem to be satisfied by the hand-crafted CANN models. Our method belongs to the scheme of optimization-based approaches, and the learned response maps share similar properties as assumed by the CANN models. Second, our work differs from the PCA-based basis expansion models [13, 33, 35] in that, unlike PCA, we make no assumption about the orthogonality between the basis functions, and the basis functions are generated by the transformation model. Furthermore, in previous basis expansion models [13, 33], place fields with Mexican-hat patterns (with balanced excitatory center and inhibitory surround) had to be assumed in order to obtain hexagonal grid firing patterns. However, experimentally measured place fields in biological brains were instead well characterized by Gaussian functions. 
Crucially, in our model, hexagonal grids emerge from learning with Gaussian place fields, and there is no need to assume any additional surround mechanisms or difference of Gaussians kernels. In another related paper, [19] proposed matrix representation of 2D self-motion, while our work analyzes general transformations. Our investigation of the special case of linear transformation model reveals the matrix Lie group and the matrix Lie algebra of rotation group. Our work also connects the linear transformation model to the basis expansion model via unitary group representation theory. 8 Conclusion This paper analyzes the recurrent model for path integration calculations by grid cells. We identify a group representation condition and an isotropic scaling condition that give rise to locally conformal embedding of the self-motion. We study a linear prototype model that reveals the matrix Lie group of rotation, and explore the connection between the isotropic scaling condition and hexagon grid patterns. In addition to these theoretical investigations, our numerical experiments demonstrate that our model can learn hexagon grid patterns for the response maps of grid cells, and the learned model is capable of accurate long distance path integration. In this work, the numerical experiments are mostly limited to the linear transformation model, with the exception of an experiment with ReLU non-linearity. We will conduct experiments on the other non-linear transformation models, especially the forms assumed by the hand-crafted continuous attractor neural networks. Moreover, we assume that the agent navigates within a square open-field environment without obstacles or rewards. It is worthwhile to explore more complicated environments, including 3D environment. Acknowledgments and Disclosure of Funding The work was supported by NSF DMS-2015577, ONR MURI project N00014-16-1-2007, DARPA XAI project N66001-17-2-4029, and XSEDE grant ASC170063. We thank Yaxuan Zhu from UCLA Department of Statistics for his help with experiments on egocentric vision. We thank Dr. Wenhao Zhang for sharing his knowledge and insights on continuous attractor neural networks. We thank Sirui Xie for discussions. We thank the three reviewers for their constructive comments.
1. What is the main contribution of the paper in understanding grid cells?
2. What makes the paper's approach unique compared to other works on grid cell emergence?
3. How does the paper demonstrate the robustness of its proposed model to noise?
4. In what ways does the paper provide insights into the emergence of grid cells in the brain?
5. How does the reviewer assess the significance and impact of the paper's findings on the field of neuroscience?
Summary Of The Paper Review
Summary Of The Paper The paper derives a model of grid cells from first principles, starting with the basic assumption that position is represented as a high-D vector and that it is updated as a function of direction and speed. The authors stipulate an isotropic scaling condition in order to make the representation robust to neural noise. The elements of the resulting high-D embedding resemble grid cells and, importantly, support error correction during path integration. Review I find this to be a super interesting and exciting paper. There are by now many papers that claim to show how grid cells emerge as a result of some kind of learning rule, but in my view this is the most insightful analysis I've seen to date. It is a sensible and mathematically grounded approach. Although they show how the grid cells emerge from learning, the resulting solution corroborates the theory, and so we understand why this is happening. An important part of the story is robustness to noise, which is clearly linked to the isotropic condition and hence hexagonal grids. This is high-impact work that I believe will be of interest to both the NeurIPS and computational neuroscience communities.
NIPS
Title On Path Integration of Grid Cells: Group Representation and Isotropic Scaling Abstract Understanding how grid cells perform path integration calculations remains a fundamental problem. In this paper, we conduct theoretical analysis of a general representation model of path integration by grid cells, where the 2D self-position is encoded as a higher dimensional vector, and the 2D self-motion is represented by a general transformation of the vector. We identify two conditions on the transformation. One is a group representation condition that is necessary for path integration. The other is an isotropic scaling condition that ensures locally conformal embedding, so that the error in the vector representation translates conformally to the error in the 2D self-position. Then we investigate the simplest transformation, i.e., the linear transformation, uncover its explicit algebraic and geometric structure as matrix Lie group of rotation, and explore the connection between the isotropic scaling condition and a special class of hexagon grid patterns. Finally, with our optimization-based approach, we manage to learn hexagon grid patterns that share similar properties of the grid cells in the rodent brain. The learned model is capable of accurate long distance path integration. Code is available at https://github.com/ruiqigao/grid-cell-path. 1 Introduction Imagine walking in the darkness. Purely based on the sense of self-motion, one can gain a sense of self-position by integrating the self motion - a process often referred to as path integration [10, 14, 21, 15, 27]. While the exact neural underpinning of path integration remains unclear, it has been hypothesized that the grid cells [21, 17, 40, 24, 23, 12] in the mammalian medial entorhinal cortex (mEC) may be involved in this process [20, 30, 22]. The grid cells are so named because individual neurons exhibit striking firing patterns that form hexagonal grids when the agent (such as a rat) navigates in a 2D open field [18, 21, 16, 6, 34, 5, 7, 11, 29, 1]. The grid cells also interact with the place cells in the hippocampus [28]. Unlike a grid cell that fires at the vertices of a lattice, a place cell often fires at a single (or a few) locations. The purpose of this paper is to understand how the grid cells may perform path integration calculations. We study a general optimization-based representational model in which the 2D self-position is ∗The author is now a Research Scientist at Google Brain team. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). represented by a higher dimensional vector and the 2D self-motion is represented by a transformation of the vector. The vector representation can be considered position encoding or position embedding, where the elements of the vector may be interpreted as activities of a population of grid cells. The transformation can be realized by a recurrent network that acts on the vector. Our focus is to study the properties of the transformation. Specifically, we identify two conditions for the transformation: a group representation condition and an isotropic scaling condition, under which we demonstrate that the local neighborhood around each self-position in the 2D physical space is embedded conformally as a 2D neighborhood around the vector representation of the self-position in the neural space. 
We then investigate the simplest special case of the transformation, i.e., linear transformation, that forms a matrix Lie group of rotation, under which case we show that the isotropic scaling condition is connected to a special class of hexagonal grid patterns. Our numerical experiments demonstrate that our model learns clear hexagon grid patterns of multiple scales which share observed properties of the grid cells in the rodent brain, by optimizing a simple loss function. The learned model is also capable of accurate long distance path integration. Contributions. Our work contributes to understanding the grid cells from the perspective of representation learning. We conduct theoretical analysis of (1) general transformation for path integration by identifying two key conditions and a local conformal embedding property, (2) linear transformation by revealing the algebraic and geometric structure and connecting the isotropic scaling condition and a special class of hexagon grid patterns, and (3) integration of linear transformation model and linear basis expansion model via unitary group representation theory. Experimentally we learn clear hexagon grid patterns that are consistent with biological observations, and the learned model is capable of accurate path integration. 2 General transformation 2.1 Position embedding Consider an agent (e.g., a rat) navigating within a 2D open field. Let x = (x1,x2) be the selfposition of the agent. We assume that the selfposition x in the 2D physical space is represented by the response activities of a population of d neurons (e.g., d = 200), which form a vector v(x) = (vi(x), i = 1, ...,d)> in the ddimensional “neural space”, with each element vi(x) representing the firing rate of one neuron when the animal is at location x. v(x) can be called position encoding or position embedding. Collectively, (v(x),∀x) forms a codebook of x ∈ R2, and (v(x),∀x) is a 2D manifold in the d-dimensional neural space, i.e., globally we embed R2 as a 2D manifold in the neural space. Locally, we identify two condi- tions under which the 2D local neighborhood around each x is embedded conformally as a 2D neighborhood around v(x) with a scaling factor. See Fig. 1. As shown in Section 3.3, the conformal embedding is connected to the hexagon grid patterns. 2.2 Transformation and path integration At self-position x, if the agent makes a self-motion ∆x = (∆x1,∆x2), then it moves to x+∆x. Correspondingly, the vector representation v(x) is transformed to v(x+∆x). The general form of the transformation can be formulated as: v(x+∆x) = F(v(x),∆x). (1) The transformation F(·,∆x) can be considered a representation of ∆x, which forms a 2D additive group. We call Eq. (1) the transformation model. It can be implemented by a recurrent network to derive a path integration model: if the agent starts from x0, and makes a sequence of moves (∆xt , t = 1, ...,T ), then the vector is updated by vt = F(vt−1,∆xt), where v0 = v(x0), and t = 1, ...,T . 2.3 Group representation condition The solution to the transformation model (Eq. (1)) should satisfy the following condition. Condition 1. (Group representation condition) (v(x),∀x) and (F(·,∆x),∀∆x) form a representation of the 2D additive Euclidean group R2 in the sense that F(v(x),0) = v(x), ∀x; (2) F(v(x),∆x1 +∆x2) = F(F(v(x),∆x1),∆x2), ∀x,∆x1,∆x2. (3) (F(·,∆x),∀∆x) is a Lie group of transformations acting on the codebook manifold (v(x),∀x). The reason for (2) is that if ∆x = 0, then F(·,0) should be the identity transformation. 
Thus the codebook manifold (v(x),∀x) consists of fixed points of the transformation F(·,0). If F(·,0) is furthermore a contraction around (v(x),∀x), then (v(x),∀x) are the attractor points. The reason for (3) is that the agent can move in one step by ∆x1 +∆x2, or first move by ∆x1, and then move by ∆x2. Both paths would end up at the same x+∆x1 +∆x2, which is represented by the same v(x+∆x1 +∆x2). The group representation condition is a necessary self-consistent condition for the transformation model (Eq. (1)). 2.4 Egocentric self-motion Self-motion ∆x can also be parametrized egocentrically as (∆r,θ), where ∆r is the displacement along the direction θ ∈ [0,2π], so that ∆x= (∆x1 = ∆r cosθ ,∆x2 = ∆r sinθ). The egocentric self-motion may be more biologically plausible where θ is encoded by head direction, and ∆r can be interpreted as the speed along direction θ . The transformation model then becomes v(x+∆x) = F(v(x),∆r,θ), (4) where we continue to use F(·) for the transformation (with slight abuse of notation). (∆r,θ) form a polar coordinate system around x. 2.5 Infinitesimal self-motion and directional derivative In this subsection, we derive the transformation model for infinitesimal self-motion. While we use ∆x or ∆r to denote finite (non-infinitesimal) self-motion, we use δx or δ r to denote infinitesimal self-motion. At self-position x, for an infinitesimal displacement δ r along direction θ , δx= (δx1 = δ r cosθ ,δx2 = δ r sinθ). See Fig. 1 (a) for an illustration. Given that δ r is infinitesimal, for any fixed θ , a first order Taylor expansion of F(v(x),δ r,θ) with respect to δ r gives us v(x+δx) = F(v(x),δ r,θ) = F(v(x),0,θ)+F ′(v(x),0,θ)δ r+o(δ r) = v(x)+ fθ (v(x))δ r+o(δ r), (5) where F(v(x),0,θ) = v(x) according to Condition 1, and fθ (v(x)) := F ′(v(x),0,θ) is the first derivative of F(v(x),∆r,θ) with respect to ∆r at ∆r = 0. fθ (v(x)) is the directional derivative of F(·) at self-position x and direction θ . For a fixed θ , (F(·,∆r,θ),∀∆r) forms a one-parameter Lie group of transformations, and fθ (·) is the generator of its Lie algebra. 2.6 Isotropic scaling condition With the directional derivative, we define the second condition as follows, which leads to locally conformal embedding and is connected to hexagon grid pattern. Condition 2. (Isotropic scaling condition) For any fixed x, ‖ fθ (v(x))‖ is constant over θ . Let f0(v(x)) denote fθ (v(x)) for θ = 0, and fπ/2(v(x)) denote fθ (v(x)) for θ = π/2. Then we have the following theorem: Theorem 1. Assume group representation condition 1 and isotropic scaling condition 2. At any fixed x, for the local motion δx= (δ r cosθ ,δ r sinθ) around x, let δv = v(x+δx)−v(x) be the change of vector and s = ‖ fθ (v(x))‖, then we have ‖δv‖= s‖δx‖. Moreover, δv = fθ (v(x))δ r+o(δ r) = f0(v(x))δ r cosθ + fπ/2(v(x))δ r sinθ +o(δ r), (6) where f0(v(x)) and fπ/2(v(x)) are two orthogonal basis vectors of equal norm s. See Supplementary for a proof and Fig. 1(b) for an illustration. Theorem 1 indicates that the local 2D polar system around self-position x in the 2D physical space is embedded conformally as a 2D polar system around vector v(x) in the d-dimensional neural space, with a scaling factor s (our analysis is local for any fixed x, and s may depend on x). Conformal embedding is a generalization of isometric embedding, where the metric can be changed by a scaling factor s. If s is globally constant for all x, then the intrinsic geometry of the codebook manifold (v(x),∀x) remains Euclidean, i.e., flat. 
Why isotropic scaling and conformal embedding? The neurons are intrinsically noisy. During path integration, the errors may accumulate in v. Moreover, when inferring self-position from visual image, it is possible that v is inferred first with error, and then x is decoded from the inferred v. Due to isotropic scaling and conformal embedding, locally we have ‖δv‖= s‖δx‖, which guarantees that the `2 error in v translates proportionally to the `2 error in x, so that there will not be adversarial perturbations in v(x) that cause excessively big errors in x. Specifically, we have the following theorem. Theorem 2. Assume the general transformation model (Eq. (4)) and the isotropic scaling condition. For any fixed x, let s = ‖ fθ (v(x))‖, which is independent of θ . Suppose the neurons are noisy: v = v(x)+ ε , where ε ∼N (0,τ2Id) and d is the dimensionality of v. Suppose the agent infers its 2D position x̂ from v by x̂ = argminx′ ‖v−v(x′)‖2, i.e., v(x̂) is the projection of v onto the 2D manifold formed by (v(x′),∀x′). Then we have E‖x̂−x‖2 = 2τ2/s2. (7) See Supplementary for a proof. Connection to continuous attractor neural network (CANN) defined on 2D torus. The group representation condition and the isotropic scaling condition appear to be satisfied by the CANN models [2, 6, 7, 29, 1] that are typically hand-designed on a 2D torus. See Supplementary for details. 3 Linear transformation After studying the general transformation, we now investigate the linear transformation of v(x), for the following reasons. (1) It is the simplest transformation for which we can derive explicit algebraic and geometric results. (2) It enables us to connect the isotropic scaling condition to a special class of hexagon grid patterns. (3) In Section 4, we integrate it with the basis expansion model, which is also linear in v(x), via unitary group representation theory. For finite (non-infinitesimal) self-motion, the linear transformation model is: v(x+∆x) = F(v(x),∆x) =M(∆x)v(x), (8) where M(∆x) is a matrix. The group representation condition becomes M(∆x1 +∆x2)v(x) = M(∆x2)M(∆x1)v(x), i.e., M(∆x) is a matrix representation of self-motion ∆x, and M(∆x) acts on the coding manifold (v(x),∀x)). For egocentric parametrization of self-motion (∆r,θ), we can further write M(∆x) =Mθ (∆r) for ∆x = (∆r cosθ ,∆r sinθ), and the linear model becomes v(x+∆x) = F(v(x),∆r,θ) =Mθ (∆r)v(x). 3.1 Algebraic structure: matrix Lie algebra and Lie group For the linear model (Eq. (8)), the directional derivative is: fθ (v(x)) = F ′(v(x),0,θ) = M ′θ (0)v(x) =B(θ)v(x), where B(θ) =M ′ θ (0), which is the derivative of Mθ (∆r) with respect to ∆r at 0. For infinitesimal self-motion, the transformation model in Eq. (5) becomes v(x+δx) = (I+B(θ)δ r)v(x)+o(δ r), (9) where I is the identity matrix. It can be considered a linear recurrent network where B(θ) is the learnable weight matrix. We have the following theorem for the algebraic structure of the linear transformation. Theorem 3. Assume the linear transformation model so that for infinitesimal self-motion (δ r,θ), the model is in the form of Eq. (9), then for finite displacement ∆r, v(x+∆x) =Mθ (∆r)v(x) = exp(B(θ)∆r)v(x). (10) Proof. We can divide ∆r into N steps, so that δ r = ∆r/N→ 0 as N→ ∞, and v(x+∆x) = (I+B(θ)(∆r/N)+o(1/N))Nv(x)→ exp(B(θ)∆r)v(x) (11) as N→ ∞. The matrix exponential map is defined by exp(A) = ∑∞n=0 An/n!. The above math underlies the relationship between matrix Lie algebra and matrix Lie group in general [38]. 
For a fixed θ , the set of Mθ (∆r) = exp(B(θ)∆r) for ∆r ∈ R forms a matrix Lie group, which is both a group and a manifold. The tangent space of Mθ (∆r) at identity I is called matrix Lie algebra. B(θ) is the basis of this tangent space, and is often referred to as the generator matrix. Path integration. If the agent starts from x0, and make a sequence of moves ((∆rt ,θt), t = 1, ...,T ), then the vector representation of self-position is updated by vt = exp(B(θt)∆rt)vt−1, (12) where v0 = v(x0), and t = 1, ...,T . Approximation to exponential map. For a finite but small ∆r, exp(B(θ)∆r) can be approximated by a second-order (or higher-order) Taylor expansion exp(B(θ)∆r) = I+B(θ)∆r+B(θ)2∆r2/2+o(∆r2). (13) 3.2 Geometric structure: rotation, periodicity, metic and error correction If we assume B(θ) = −B(θ)>, i.e., skew-symmetric, then I +B(θ)δ r in Eq. (9) is a rotation matrix operating on v(x), due to the fact that (I+B(θ)δ r)(I+B(θ)δ r)> = I+O(δ r2). For finite ∆r, exp(B(θ)∆r) is also a rotation matrix, as it equals to the product of N matrices I+B(θ)(∆r/N) (Eq. (11)). The geometric interpretation is that, if the agent moves along the direction θ in the physical space, the vector v(x) is rotated by the matrix B(θ) in the neural space, while the `2 norm ‖v(x)‖2 remains fixed. We may interpret ‖v(x)‖2 = ∑di=1 vi(x)2 as the total energy of grid cells. See Fig. 1(b). The angle of rotation is given by ‖B(θ)v(x)‖δ r/‖v(x)‖, because ‖B(θ)v(x)‖δ r is the arc length and ‖v(x)‖ is the radius. If we further assume the isotropic scaling condition, which becomes that ‖ fθ (v(x))‖= ‖B(θ)v(x)‖ is constant over θ for the linear model, then the angle of rotation can be written as µδ r, where µ = ‖B(θ)v(x)‖/‖v(x)‖ is independent of θ . Geometrically, µ tells us how fast the vector rotates in the neural space as the agent moves in the physical space. In practice, µ can be much bigger than 1 for the learned model, thus the vector can rotate back to itself in a short distance, causing the periodic patterns in the elements of v(x). µ captures the notion of metric. For µ 1, the conformal embedding in Fig. 1 (b) magnifies the local motion in Fig. 1 (a), and this enables error correction [34]. More specifically, we have the following result, which is based on Theorem 2. Proposition 1. Assume the linear transformation model (Eq. (9)) and the isotropic scaling condition 2. For any fixed x, let µ = ‖B(θ)v(x)‖/‖v(x)‖. Suppose v= v(x)+ε , where ε ∼N (0,τ2Id) and τ2 = α2(‖v(x)‖2/d), so that α2 measures the variance of noise relative to the average magnitude of (vi(x)2, i= 1, ...,d). Suppose the agent infers its 2D position x̂ from v by x̂= argminx′ ‖v−v(x′)‖2. Then we have E‖x̂−x‖2 = 2α2/(µ2d). (14) See Supplementary for a proof. By the above proposition, error correction of grid cells is due to two factors: (1) higher dimensionality d of v(x) for encoding 2D positions x, and (2) a magnifying µ 1 (our analysis is local for any fixed x, and µ may depend on x). 3.3 Hexagon grid patterns formed by mixing Fourier waves In this subsection, we make connection between the isotropic scaling condition 2 and a special class of hexagon grid patterns created by linearly mixing three Fourier plane waves whose directions are 2π/3 apart. We show such linear mixing satisfies the linear transformation model and the isotropic scaling condition. Theorem 4. Let e(x) = (exp(i〈a j,x〉), j = 1,2,3)>, where (a j, j = 1,2,3) are three 2D vectors of equal norm, and the angle between every pair of them is 2π/3. 
Let v(x) =Ue(x), where U is an arbitrary unitary matrix. Let B(θ) =U ∗D(θ)U , where D(θ) = diag(i〈a j,q(θ)〉, j = 1,2,3), with q(θ) = (cosθ ,sinθ)>. Then (v(x),B(θ)) satisfies the linear transformation model (Eq. (9)) and the isotropic scaling condition 2. Moreover, B(θ) is skew-symmetric. See Supplementary for a proof. We would like to emphasize that the above theorem analyzes a special case solution to our linear transformation model, but our optimization-based learning method does not assume any superposition of Fourier basis functions as in the theorem. Our experimental results are learned purely by optimizing a loss function based on the simple assumptions of our model with generic vectors and matrices. We leave it to future work to theoretically prove that the isotropic scaling condition leads to hexagon grid patterns in either the general transformation model or the linear transformation model. The hexagon grid patterns are not limited to superpositions of three plane waves as in the above theorem. 3.4 Modules Biologically, it is well established that grid cells are organized in discrete modules [4, 37] or blocks. We thus partition the vector v(x) into K blocks, v(x) = (vk(x),k = 1, ...,K). Correspondingly the generator matrices B(θ) = diag(Bk(θ),k = 1, ...,K) are block diagonal, so that each sub-vector vk(x) is rotated by a sub-matrix Bk(θ). For the general transformation model, each sub-vector is transformed by a separate sub-network. By the same argument as in Section 3.2, let µk = ‖Bkvk(x)‖/‖vk(x)‖, then µk is the metric of module k. 4 Interaction with place cells 4.1 Place cells For each v(x), we need to uniquely decode x globally. This can be accomplished via interaction with place cells. Specifically, each place cell fires when the agent is at a specific position. Let A(x,x′) be the response map of the place cell associated with position x′. It measures the adjacency between x and x′. A commonly used form of A(x,x′) is the Gaussian adjacency kernel A(x,x′) = exp(−‖x−x′‖2/(2σ2)). The set of Gaussian adjacency kernels serve as inputs to our optimizationbased method to learn grid cells. 4.2 Basis expansion A popular model that connects place cells and grid cells is the following basis expansion model (or PCA-based model) [13]: A(x,x′) = 〈v(x),u(x′)〉= d ∑ i=1 ui,x′vi(x), (15) where v(x) = (vi(x), i = 1, ...,d)>, and u(x′) = (ui,x′ , i = 1, ...,d)>. Here (vi(x), i = 1, ...,d) forms a set of d basis functions (which are functions of x) for expanding A(x,x′) (which is a function of x for each place x′), while u(x′) is the read-out weight vector for place cell at x′ and needs to be learned. See Fig. 2 for an illustration. Experimental results on biological brains have shown that the connections from grid cells to place cells are excitatory [42, 31]. We thus assume that ui,x′ ≥ 0 for all i and x′. 4.3 From group representation to basis functions The vector representation v(x) generated (or constrained) by the linear transformation model (Eq. (8)) can serve as basis functions of the PCA-based basis expansion model (Eq. (15)), due to the fundamental theorems of Schur [41] and Peter-Weyl [38], which reveal the deep root of Fourier analysis and generalize it to general Lie groups. Specifically, if M(∆x) is an irreducible unitary representation of ∆x that forms a compact Lie group, then the elements {Mi j(∆x)} form a set of orthogonal basis functions of ∆x. Let v(x) = M(x)v(0) (where we choose the origin 0 as the reference point). 
The elements of v(x), i.e., (vi(x), i = 1, ..., d), are linear mixings of the basis functions {Mij(x)}, so that they themselves form a new set of basis functions that serve to expand (A(x,x′), ∀x′), which parametrizes the place cells. Thus group representation in our path integration model is a perfect match to the basis expansion model, in the sense that the basis functions are results of group representation. The basis expansion model (or PCA-based model) (Eq. (15)) assumes that the basis functions are orthogonal, whereas in our work, we do not make the orthogonality assumption. Interestingly, the learned transformation model generates basis functions that are close to being orthogonal automatically. See Supplementary for a more detailed explanation and experimental results. 4.4 Decoding and re-encoding For a neural response vector v, such as vt in Eq. (12), the response of the place cell associated with location x′ is 〈v, u(x′)〉. We can decode the position x̂ by examining which place cell has the maximal response, i.e., x̂ = argmax_{x′} 〈v, u(x′)〉. (16) After decoding x̂, we can re-encode v ← v(x̂) for error correction. Decoding and re-encoding can also be done by directly projecting v onto the manifold (v(x), ∀x), which gives similar results. See Supplementary for more analysis and experimental results. 5 Learning We learn the model by optimizing a loss function defined based on the three model assumptions discussed above: (1) the basis expansion model (Eq. (15)), (2) the linear transformation model (Eq. (10)) and (3) the isotropic scaling condition 2. The input is the set of adjacency kernels A(x,x′), ∀x,x′. The unknown parameters to be learned are (1) (v(x) = (vk(x), k = 1, ..., K), ∀x), (2) (u(x′), ∀x′) and (3) (B(θ), ∀θ). We assume that there are K modules or blocks and that B(θ) is skew-symmetric, so that B(θ) is parametrized as block-diagonal matrices (diag(Bk(θ), k = 1, ..., K), ∀θ) and only the lower triangular parts of the matrices need to be learned. The loss function is defined as a weighted sum of simple ℓ2 loss terms constraining the three model assumptions: L = L0 + λ1L1 + λ2L2, where L0 = E_{x,x′}[A(x,x′) − 〈v(x), u(x′)〉]², (basis expansion) (17) L1 = ∑_{k=1}^K E_{x,∆x}‖vk(x+∆x) − exp(Bk(θ)∆r)vk(x)‖², (transformation) (18) L2 = ∑_{k=1}^K E_{x,θ,∆θ}[‖Bk(θ+∆θ)vk(x)‖ − ‖Bk(θ)vk(x)‖]². (isotropic scaling) (19) In L1, ∆x = (∆r cosθ, ∆r sinθ). λ1 and λ2 are chosen so that the three loss terms are of similar magnitudes. A(x,x′) are given as Gaussian adjacency kernels. For regularization, we add a penalty on ‖u(x′)‖², and further assume u(x′) ≥ 0 so that the connections from grid cells to place cells are excitatory [42, 31]. However, note that u(x′) ≥ 0 is not necessary for the emergence of hexagon grid patterns, as shown in the ablation studies. The expectations in L0, L1 and L2 are approximated by Monte Carlo samples. L is minimized by the Adam [25] optimizer. See Supplementary for implementation details. It is worth noting that, consistent with the experimental observations, we assume each individual place field A(x,x′) exhibits a Gaussian shape, rather than a Mexican-hat pattern (with balanced excitatory center and inhibitory surround) as assumed in previous basis expansion models [13, 33] of grid cells. ReLU non-linearity. We also experiment with a non-linear transformation model where a ReLU activation is added. See Supplementary for details. 6 Experiments We conduct numerical experiments to learn the representations as described in Section 5.
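A minimal PyTorch-style sketch of the loss in Eqs. (17)-(19) is given below. The tensor layouts, index-sampling interface, and the skew-symmetric parametrization B = W − W> are illustrative assumptions, not the authors' released implementation; in practice λ1 and λ2 are tuned so the three terms have similar magnitudes.

```python
import torch

#   v : (N, K, m)          codebook of grid-cell sub-vectors over N lattice sites
#   u : (N, d), u >= 0     read-out weights of N place cells, d = K * m
#   W : (T, K, m, m)       free parameters; B = W - W^T gives skew-symmetric blocks
#   A : (N, N)             Gaussian adjacency kernels A(x, x')
#   x, xp, x2, th, th2 are LongTensors of sampled indices; dr is a float tensor.

def make_B(W):
    return W - W.transpose(-1, -2)                      # skew-symmetric B_k(theta)

def rotate(v_x, B_th, dr):
    # block-wise exp(B_k(theta) * dr) v_k(x), via the second-order Taylor of Eq. (13)
    Bd = B_th * dr[:, None, None, None]
    rot = torch.eye(Bd.shape[-1]) + Bd + 0.5 * Bd @ Bd
    return torch.einsum('nkij,nkj->nki', rot, v_x)

def total_loss(A, v, u, W, x, xp, x2, th, th2, dr, lam1=1.0, lam2=1.0):
    B = make_B(W)
    N, K, m = v.shape
    # (17) basis expansion
    L0 = ((A[x, xp] - (v[x].reshape(-1, K * m) * u[xp]).sum(-1)) ** 2).mean()
    # (18) transformation: v(x + dx) vs exp(B(theta) dr) v(x)
    L1 = ((v[x2] - rotate(v[x], B[th], dr)) ** 2).sum((-1, -2)).mean()
    # (19) isotropic scaling: norms equal across nearby directions
    n1 = torch.einsum('nkij,nkj->nki', B[th],  v[x]).norm(dim=-1)
    n2 = torch.einsum('nkij,nkj->nki', B[th2], v[x]).norm(dim=-1)
    L2 = ((n1 - n2) ** 2).sum(-1).mean()
    return L0 + lam1 * L1 + lam2 * L2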
Specifically, we use a square environment with size 1m × 1m, which is discretized into a 40×40 lattice. For direction, we discretize the circle [0, 2π] into 144 directions and use nearest-neighbor linear interpolation for values in between. We use the second-order Taylor expansion (Eq. (13)) to approximate the exponential map exp(B(θ)∆r). The displacement ∆r is sampled within a small range, i.e., ∆r is smaller than 3 grid spacings on the lattice. For A(x,x′), we use a Gaussian adjacency kernel with σ = 0.07. v(x) is of d = 192 dimensions, which is partitioned into K = 16 modules, each of which has 12 cells. 6.1 Hexagon grid patterns Fig. 3 shows the learned firing patterns of v(x) = (vi(x), i = 1, ..., d) over the 40×40 lattice of x. Every row shows the learned units belonging to the same block or module. Regular hexagon grid patterns emerge. Within each block or module, the scales and orientations are roughly the same, but with different phases or spatial shifts. For the learned B(θ), each element shows regular sine/cosine tuning over θ. See Supplementary for more learned patterns. We further investigate the characteristics of the learned firing patterns of v(x) using measures adopted from the literature on grid cells. Specifically, the hexagonal regularity, scale and orientation of grid-like patterns are quantified using the gridness score, grid scale and grid orientation [26, 32], which are determined by taking a circular sample of the autocorrelogram of the response map. Table 1 summarizes the results of gridness scores and comparisons with other optimization-based approaches [3, 33]. We apply the same threshold as in [3] to determine whether a learned neuron can be considered a grid cell (i.e., gridness score > 0.37). For our model, 73.10% of the learned neurons exhibit significant hexagonal periodicity in terms of the gridness score. Fig. 4 shows the histogram of grid scales of the learned grid cell neurons (mean 0.33, range 0.21 to 0.49), which follows a multi-modal distribution. The ratios between neighboring modes are roughly 1.52 and 1.51, which closely match the theoretical predictions [39, 36] and also the empirical results from rodent grid cells [37]. Collectively, these results reveal striking, quantitative correspondence between the properties of our model neurons and those of the grid cells in the brain. Connection to continuous attractor neural network (CANN) defined on 2D torus. The fact that the learned response maps of each module are shifted versions of a common hexagonal periodic pattern implies that the learned codebook manifold forms a 2D torus, and as the agent moves, the responses of the grid cells undergo a cyclic permutation. This is consistent with the CANN models hand-crafted on a 2D torus. See Supplementary for a detailed discussion. Ablation studies. We conduct ablation studies to examine whether certain model assumptions are empirically important for the emergence of hexagon grid patterns. The conclusions are highlighted as follows: (1) The loss term L2 (Eq. (19)) constraining the isotropic scaling condition is necessary for learning hexagon grid patterns. (2) The constraint u(x′) ≥ 0 is not necessary for learning hexagon patterns, although without the constraint the activations can be either excitatory or inhibitory. (3) The skew-symmetric assumption on B(θ) is not important for learning hexagon grid patterns. (4) Hexagon patterns always emerge regardless of the choice of block size and number of blocks.
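The experimental setup described above (a 40×40 lattice over a 1m × 1m arena, 144 discretized directions, Gaussian adjacency kernels with σ = 0.07, and d = 192 cells in K = 16 modules) can be constructed as in the following NumPy sketch; the variable names are illustrative.

```python
import numpy as np

n_grid, n_theta, sigma = 40, 144, 0.07
coords = np.stack(np.meshgrid(np.linspace(0, 1, n_grid),
                              np.linspace(0, 1, n_grid),
                              indexing='ij'), -1).reshape(-1, 2)    # (1600, 2) positions
thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)         # discretized directions

# Gaussian adjacency kernels A(x, x') = exp(-||x - x'||^2 / (2 sigma^2))
sq_dist = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # (1600, 1600)
A = np.exp(-sq_dist / (2 * sigma ** 2))

d, K = 192, 16              # 16 modules of 12 cells each
block_size = d // K
```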
(5) Multiple blocks or modules are necessary for the emergence of hexagon grid patterns of multiple scales. See Fig. 5 for several learned patterns and Supplementary for the full studies. 6.2 Path integration We then examine the ability of the learned model to perform multi-step path integration, which can be accomplished by recurrently updating vt (Eq. (12)) and decoding vt to xt for t = 1, ..., T (Eq. (16)). Re-encoding vt ← v(xt) after decoding is adopted. Fig. 6(a) shows an example trajectory of accurate path integration for T = 30 time steps. As shown in Fig. 6(b), with re-encoding, the path integration error remains close to zero over a duration of 500 time steps (< 0.01 cm, averaged over 1,000 episodes), even if the model is trained with the single-time-step transformation model (Eq. (18)). Without re-encoding, the error is slightly higher but still remains small (ranging from 0.0 to 4.2 cm, mean 1.9 cm in the 1m × 1m environment). Fig. 6(c) summarizes the path integration performance by fixing the number of blocks and altering the block size. The performance of path integration improves as the block size becomes larger, i.e., with more neurons in each module. When the block size is larger than 16, path integration is very accurate for the time steps tested. Error correction. See Supplementary for numerical experiments on error correction, which show that the learned model is still capable of path integration when we apply Gaussian white noise errors or Bernoulli drop-out errors to vt. 6.3 Additional experiments on path planning and egocentric vision We also conduct additional experiments on path planning and egocentric vision with our model. Path planning can be accomplished by steepest ascent on the adjacency to the target position. For egocentric vision, we learn an extra generator network that generates the visual image given the position encoding formed by the grid cells. See Supplementary for details. 7 Related work Our work is related to several lines of previous research on modeling grid cells. First, RNN models have been used to model grid cells and path integration. The traditional approach uses simulation-based models with hand-crafted connectivity, known as continuous attractor neural networks (CANN) [2, 6, 7, 29, 1]. On the other hand, more recently, two pioneering papers [9, 3] developed optimization-based RNN approaches to learn the path integration model and discovered that grid-like response patterns can emerge in the optimized networks. These results are further substantiated in [33, 8]. Our work analyzes the properties of the general recurrent model for path integration, and these properties seem to be satisfied by the hand-crafted CANN models. Our method belongs to the scheme of optimization-based approaches, and the learned response maps share similar properties to those assumed by the CANN models. Second, our work differs from the PCA-based basis expansion models [13, 33, 35] in that, unlike PCA, we make no assumption about the orthogonality between the basis functions, and the basis functions are generated by the transformation model. Furthermore, in previous basis expansion models [13, 33], place fields with Mexican-hat patterns (with balanced excitatory center and inhibitory surround) had to be assumed in order to obtain hexagonal grid firing patterns. However, experimentally measured place fields in biological brains are instead well characterized by Gaussian functions.
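The multi-step path integration procedure (recurrent update via Eq. (12), decoding via Eq. (16), and optional re-encoding) could be organized as in the sketch below. It uses the nearest-codeword projection for decoding, which the paper reports gives results similar to the place-cell read-out; the function names and interfaces are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import expm

def decode(v_vec, v_codebook, coords):
    """x_hat via nearest-codeword projection, equivalent in practice to
    x_hat = argmax_{x'} <v, u(x')> according to the paper."""
    idx = np.argmin(((v_codebook - v_vec) ** 2).sum(-1))
    return idx, coords[idx]

def integrate_path(v_codebook, coords, B_of_theta, start_idx, moves, reencode=True):
    """Recurrent path integration (Eq. (12)) with decode and optional re-encode."""
    v = v_codebook[start_idx].copy()
    xs = [coords[start_idx]]
    for dr, theta in moves:
        v = expm(B_of_theta(theta) * dr) @ v          # one transformation step
        idx, x_hat = decode(v, v_codebook, coords)
        if reencode:
            v = v_codebook[idx].copy()                # snap back onto the codebook manifold
        xs.append(x_hat)
    return np.array(xs)
```

Comparing the decoded trajectory to the ground-truth trajectory gives the path integration error reported in Fig. 6.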
Crucially, in our model, hexagonal grids emerge from learning with Gaussian place fields, and there is no need to assume any additional surround mechanisms or difference-of-Gaussians kernels. In another related paper, [19] proposed a matrix representation of 2D self-motion, while our work analyzes general transformations. Our investigation of the special case of the linear transformation model reveals the matrix Lie group and the matrix Lie algebra of the rotation group. Our work also connects the linear transformation model to the basis expansion model via unitary group representation theory. 8 Conclusion This paper analyzes the recurrent model for path integration calculations by grid cells. We identify a group representation condition and an isotropic scaling condition that give rise to a locally conformal embedding of the self-motion. We study a linear prototype model that reveals the matrix Lie group of rotation, and explore the connection between the isotropic scaling condition and hexagon grid patterns. In addition to these theoretical investigations, our numerical experiments demonstrate that our model can learn hexagon grid patterns for the response maps of grid cells, and the learned model is capable of accurate long-distance path integration. In this work, the numerical experiments are mostly limited to the linear transformation model, with the exception of an experiment with ReLU non-linearity. We will conduct experiments on other non-linear transformation models, especially the forms assumed by the hand-crafted continuous attractor neural networks. Moreover, we assume that the agent navigates within a square open-field environment without obstacles or rewards. It is worthwhile to explore more complicated environments, including 3D environments. Acknowledgments and Disclosure of Funding The work was supported by NSF DMS-2015577, ONR MURI project N00014-16-1-2007, DARPA XAI project N66001-17-2-4029, and XSEDE grant ASC170063. We thank Yaxuan Zhu from the UCLA Department of Statistics for his help with experiments on egocentric vision. We thank Dr. Wenhao Zhang for sharing his knowledge and insights on continuous attractor neural networks. We thank Sirui Xie for discussions. We thank the three reviewers for their constructive comments.
1. What is the main contribution of the paper regarding grid cells and path integration? 2. What are the concerns regarding the proof of Theorem 3 and its relation to orthogonal assumptions? 3. How does the proposed formulation compare to previous studies on cosine functions and hexagonal patterns? 4. Is there a lack of clarity in the title of Section 3.3? 5. Would providing a theoretical guarantee for generating hexagonal patterns strengthen the paper? 6. Is there confusion regarding the derivation of the third item from the second item in Equation 9?
Summary Of The Paper Review
Summary Of The Paper This paper focuses on how grid cells perform path integration. Specifically, this paper introduces the isotropic scaling condition to produce the hexagon firing pattern for grid cells. The optimization experiments with the proposed formulation of place and grid cells demonstrate this. Review Although I appreciate that the authors provide an interesting formulation of the relationship between place and grid cells, I have the following concerns: The authors claim that they don’t make any orthogonality assumption and that the isotropic scaling guarantees the l_2 error in L122. However, in the proof of Theorem 3 of this error bound, explicit orthogonality assumptions are made. I expect the authors to provide, in the rebuttal, a more rigorous proof of this error bound without any orthogonality assumptions. Many prior works have shown that cos/sin functions can produce hexagon patterns, e.g., [a, b]. Can you provide more connections to those results? I highly suspect that the three conditions introduced in this paper will result in the cosine functions (maybe the formulation in this paper, e.g., the isotropic scaling, is mathematically equivalent to the cosine functions?), especially after seeing Fig 7 in the Appendix. If this conjecture is correct, then I see no contribution in this paper, since we can simply use the predefined functions as in [a] to both perform path integration and generate hexagon patterns. Then what is the purpose of the optimization? The authors didn’t show in the proof that the linear model with the isotropic condition has a hexagon grid pattern, thus the title of Sec 3.3 is misleading; “hexagon grid pattern” should be removed from the section title. A theoretical guarantee that the introduced formulation generates hexagon patterns would strengthen this paper a lot. It’s not clear how to derive the third item from the second item in Eq (9). [a] Burgess N, Barry C, O'Keefe J. An oscillatory interference model of grid cell firing. Hippocampus. 2007 Sep;17(9):801-12. [b] Fuhs MC, Touretzky DS. A spin glass model of path integration in rat medial entorhinal cortex. Journal of Neuroscience. 2006 Apr 19;26(16):4266-76.
NIPS
Title On Path Integration of Grid Cells: Group Representation and Isotropic Scaling Abstract Understanding how grid cells perform path integration calculations remains a fundamental problem. In this paper, we conduct theoretical analysis of a general representation model of path integration by grid cells, where the 2D self-position is encoded as a higher dimensional vector, and the 2D self-motion is represented by a general transformation of the vector. We identify two conditions on the transformation. One is a group representation condition that is necessary for path integration. The other is an isotropic scaling condition that ensures locally conformal embedding, so that the error in the vector representation translates conformally to the error in the 2D self-position. Then we investigate the simplest transformation, i.e., the linear transformation, uncover its explicit algebraic and geometric structure as a matrix Lie group of rotation, and explore the connection between the isotropic scaling condition and a special class of hexagon grid patterns. Finally, with our optimization-based approach, we manage to learn hexagon grid patterns that share similar properties with the grid cells in the rodent brain. The learned model is capable of accurate long-distance path integration. Code is available at https://github.com/ruiqigao/grid-cell-path. (∗The author is now a Research Scientist at the Google Brain team.) 1 Introduction Imagine walking in the darkness. Purely based on the sense of self-motion, one can gain a sense of self-position by integrating the self-motion - a process often referred to as path integration [10, 14, 21, 15, 27]. While the exact neural underpinning of path integration remains unclear, it has been hypothesized that the grid cells [21, 17, 40, 24, 23, 12] in the mammalian medial entorhinal cortex (mEC) may be involved in this process [20, 30, 22]. The grid cells are so named because individual neurons exhibit striking firing patterns that form hexagonal grids when the agent (such as a rat) navigates in a 2D open field [18, 21, 16, 6, 34, 5, 7, 11, 29, 1]. The grid cells also interact with the place cells in the hippocampus [28]. Unlike a grid cell that fires at the vertices of a lattice, a place cell often fires at a single (or a few) locations. The purpose of this paper is to understand how the grid cells may perform path integration calculations. We study a general optimization-based representational model in which the 2D self-position is represented by a higher dimensional vector and the 2D self-motion is represented by a transformation of the vector. The vector representation can be considered position encoding or position embedding, where the elements of the vector may be interpreted as activities of a population of grid cells. The transformation can be realized by a recurrent network that acts on the vector. Our focus is to study the properties of the transformation. Specifically, we identify two conditions for the transformation: a group representation condition and an isotropic scaling condition, under which we demonstrate that the local neighborhood around each self-position in the 2D physical space is embedded conformally as a 2D neighborhood around the vector representation of the self-position in the neural space.
We then investigate the simplest special case of the transformation, i.e., the linear transformation, which forms a matrix Lie group of rotation, in which case we show that the isotropic scaling condition is connected to a special class of hexagonal grid patterns. Our numerical experiments demonstrate that, by optimizing a simple loss function, our model learns clear hexagon grid patterns of multiple scales which share observed properties of the grid cells in the rodent brain. The learned model is also capable of accurate long-distance path integration. Contributions. Our work contributes to understanding the grid cells from the perspective of representation learning. We conduct theoretical analysis of (1) the general transformation for path integration, by identifying two key conditions and a local conformal embedding property, (2) the linear transformation, by revealing the algebraic and geometric structure and connecting the isotropic scaling condition and a special class of hexagon grid patterns, and (3) the integration of the linear transformation model and the linear basis expansion model via unitary group representation theory. Experimentally, we learn clear hexagon grid patterns that are consistent with biological observations, and the learned model is capable of accurate path integration. 2 General transformation 2.1 Position embedding Consider an agent (e.g., a rat) navigating within a 2D open field. Let x = (x1, x2) be the self-position of the agent. We assume that the self-position x in the 2D physical space is represented by the response activities of a population of d neurons (e.g., d = 200), which form a vector v(x) = (vi(x), i = 1, ..., d)> in the d-dimensional “neural space”, with each element vi(x) representing the firing rate of one neuron when the animal is at location x. v(x) can be called position encoding or position embedding. Collectively, (v(x), ∀x) forms a codebook of x ∈ R2, and (v(x), ∀x) is a 2D manifold in the d-dimensional neural space, i.e., globally we embed R2 as a 2D manifold in the neural space. Locally, we identify two conditions under which the 2D local neighborhood around each x is embedded conformally as a 2D neighborhood around v(x) with a scaling factor. See Fig. 1. As shown in Section 3.3, the conformal embedding is connected to the hexagon grid patterns. 2.2 Transformation and path integration At self-position x, if the agent makes a self-motion ∆x = (∆x1, ∆x2), then it moves to x+∆x. Correspondingly, the vector representation v(x) is transformed to v(x+∆x). The general form of the transformation can be formulated as: v(x+∆x) = F(v(x), ∆x). (1) The transformation F(·, ∆x) can be considered a representation of ∆x, which forms a 2D additive group. We call Eq. (1) the transformation model. It can be implemented by a recurrent network to derive a path integration model: if the agent starts from x0, and makes a sequence of moves (∆xt, t = 1, ..., T), then the vector is updated by vt = F(vt−1, ∆xt), where v0 = v(x0), and t = 1, ..., T. 2.3 Group representation condition The solution to the transformation model (Eq. (1)) should satisfy the following condition. Condition 1. (Group representation condition) (v(x), ∀x) and (F(·, ∆x), ∀∆x) form a representation of the 2D additive Euclidean group R2 in the sense that F(v(x), 0) = v(x), ∀x; (2) F(v(x), ∆x1 + ∆x2) = F(F(v(x), ∆x1), ∆x2), ∀x, ∆x1, ∆x2. (3) (F(·, ∆x), ∀∆x) is a Lie group of transformations acting on the codebook manifold (v(x), ∀x). The reason for (2) is that if ∆x = 0, then F(·, 0) should be the identity transformation.
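Condition 1 can be tested numerically for any candidate transformation F. Below is a small, generic checker; it is a sketch under the assumption that F is implemented as a Python callable F(v, dx), and the function name is illustrative.

```python
import numpy as np

def check_group_condition(F, v0, n_trials=100, tol=1e-6, rng=None):
    """Numerically test Condition 1 for a candidate transformation F(v, dx):
    F(v, 0) == v and F(F(v, dx1), dx2) == F(v, dx1 + dx2)."""
    rng = rng or np.random.default_rng(0)
    ok = np.allclose(F(v0, np.zeros(2)), v0, atol=tol)
    for _ in range(n_trials):
        dx1, dx2 = rng.normal(scale=0.05, size=(2, 2))
        lhs = F(F(v0, dx1), dx2)
        rhs = F(v0, dx1 + dx2)
        ok = ok and np.allclose(lhs, rhs, atol=tol)
    return ok

# Example: a toy linear F built from two commuting generators B1, B2,
#   F(v, dx) = expm(B1 * dx[0] + B2 * dx[1]) @ v,
# satisfies the condition by construction.
```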
Thus the codebook manifold (v(x),∀x) consists of fixed points of the transformation F(·,0). If F(·,0) is furthermore a contraction around (v(x),∀x), then (v(x),∀x) are the attractor points. The reason for (3) is that the agent can move in one step by ∆x1 +∆x2, or first move by ∆x1, and then move by ∆x2. Both paths would end up at the same x+∆x1 +∆x2, which is represented by the same v(x+∆x1 +∆x2). The group representation condition is a necessary self-consistent condition for the transformation model (Eq. (1)). 2.4 Egocentric self-motion Self-motion ∆x can also be parametrized egocentrically as (∆r,θ), where ∆r is the displacement along the direction θ ∈ [0,2π], so that ∆x= (∆x1 = ∆r cosθ ,∆x2 = ∆r sinθ). The egocentric self-motion may be more biologically plausible where θ is encoded by head direction, and ∆r can be interpreted as the speed along direction θ . The transformation model then becomes v(x+∆x) = F(v(x),∆r,θ), (4) where we continue to use F(·) for the transformation (with slight abuse of notation). (∆r,θ) form a polar coordinate system around x. 2.5 Infinitesimal self-motion and directional derivative In this subsection, we derive the transformation model for infinitesimal self-motion. While we use ∆x or ∆r to denote finite (non-infinitesimal) self-motion, we use δx or δ r to denote infinitesimal self-motion. At self-position x, for an infinitesimal displacement δ r along direction θ , δx= (δx1 = δ r cosθ ,δx2 = δ r sinθ). See Fig. 1 (a) for an illustration. Given that δ r is infinitesimal, for any fixed θ , a first order Taylor expansion of F(v(x),δ r,θ) with respect to δ r gives us v(x+δx) = F(v(x),δ r,θ) = F(v(x),0,θ)+F ′(v(x),0,θ)δ r+o(δ r) = v(x)+ fθ (v(x))δ r+o(δ r), (5) where F(v(x),0,θ) = v(x) according to Condition 1, and fθ (v(x)) := F ′(v(x),0,θ) is the first derivative of F(v(x),∆r,θ) with respect to ∆r at ∆r = 0. fθ (v(x)) is the directional derivative of F(·) at self-position x and direction θ . For a fixed θ , (F(·,∆r,θ),∀∆r) forms a one-parameter Lie group of transformations, and fθ (·) is the generator of its Lie algebra. 2.6 Isotropic scaling condition With the directional derivative, we define the second condition as follows, which leads to locally conformal embedding and is connected to hexagon grid pattern. Condition 2. (Isotropic scaling condition) For any fixed x, ‖ fθ (v(x))‖ is constant over θ . Let f0(v(x)) denote fθ (v(x)) for θ = 0, and fπ/2(v(x)) denote fθ (v(x)) for θ = π/2. Then we have the following theorem: Theorem 1. Assume group representation condition 1 and isotropic scaling condition 2. At any fixed x, for the local motion δx= (δ r cosθ ,δ r sinθ) around x, let δv = v(x+δx)−v(x) be the change of vector and s = ‖ fθ (v(x))‖, then we have ‖δv‖= s‖δx‖. Moreover, δv = fθ (v(x))δ r+o(δ r) = f0(v(x))δ r cosθ + fπ/2(v(x))δ r sinθ +o(δ r), (6) where f0(v(x)) and fπ/2(v(x)) are two orthogonal basis vectors of equal norm s. See Supplementary for a proof and Fig. 1(b) for an illustration. Theorem 1 indicates that the local 2D polar system around self-position x in the 2D physical space is embedded conformally as a 2D polar system around vector v(x) in the d-dimensional neural space, with a scaling factor s (our analysis is local for any fixed x, and s may depend on x). Conformal embedding is a generalization of isometric embedding, where the metric can be changed by a scaling factor s. If s is globally constant for all x, then the intrinsic geometry of the codebook manifold (v(x),∀x) remains Euclidean, i.e., flat. 
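The isotropic scaling condition and the conformal embedding statement of Theorem 1 can also be checked by finite differences for any given embedding v(x). The sketch below assumes v is available as a callable over continuous positions; the helper names are not from the paper.

```python
import numpy as np

def directional_derivative(v_func, x, theta, eps=1e-5):
    """Finite-difference estimate of f_theta(v(x)) = d/dr F(v(x), r, theta) at r = 0."""
    dx = eps * np.array([np.cos(theta), np.sin(theta)])
    return (v_func(x + dx) - v_func(x)) / eps

def check_isotropic_scaling(v_func, x, n_dirs=16, rel_tol=1e-3):
    """Condition 2: ||f_theta(v(x))|| is (approximately) constant over theta.
    Returns the estimated scaling factor s if the check passes, else None."""
    norms = np.array([np.linalg.norm(directional_derivative(v_func, x, t))
                      for t in np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)])
    s = norms.mean()
    return s if np.all(np.abs(norms - s) / s < rel_tol) else None

# Under Conditions 1 and 2, Theorem 1 gives ||dv|| = s * ||dx|| for small moves,
# which can be verified directly from v_func(x + dx) - v_func(x).
```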
Why isotropic scaling and conformal embedding? The neurons are intrinsically noisy. During path integration, the errors may accumulate in v. Moreover, when inferring self-position from visual image, it is possible that v is inferred first with error, and then x is decoded from the inferred v. Due to isotropic scaling and conformal embedding, locally we have ‖δv‖= s‖δx‖, which guarantees that the `2 error in v translates proportionally to the `2 error in x, so that there will not be adversarial perturbations in v(x) that cause excessively big errors in x. Specifically, we have the following theorem. Theorem 2. Assume the general transformation model (Eq. (4)) and the isotropic scaling condition. For any fixed x, let s = ‖ fθ (v(x))‖, which is independent of θ . Suppose the neurons are noisy: v = v(x)+ ε , where ε ∼N (0,τ2Id) and d is the dimensionality of v. Suppose the agent infers its 2D position x̂ from v by x̂ = argminx′ ‖v−v(x′)‖2, i.e., v(x̂) is the projection of v onto the 2D manifold formed by (v(x′),∀x′). Then we have E‖x̂−x‖2 = 2τ2/s2. (7) See Supplementary for a proof. Connection to continuous attractor neural network (CANN) defined on 2D torus. The group representation condition and the isotropic scaling condition appear to be satisfied by the CANN models [2, 6, 7, 29, 1] that are typically hand-designed on a 2D torus. See Supplementary for details. 3 Linear transformation After studying the general transformation, we now investigate the linear transformation of v(x), for the following reasons. (1) It is the simplest transformation for which we can derive explicit algebraic and geometric results. (2) It enables us to connect the isotropic scaling condition to a special class of hexagon grid patterns. (3) In Section 4, we integrate it with the basis expansion model, which is also linear in v(x), via unitary group representation theory. For finite (non-infinitesimal) self-motion, the linear transformation model is: v(x+∆x) = F(v(x),∆x) =M(∆x)v(x), (8) where M(∆x) is a matrix. The group representation condition becomes M(∆x1 +∆x2)v(x) = M(∆x2)M(∆x1)v(x), i.e., M(∆x) is a matrix representation of self-motion ∆x, and M(∆x) acts on the coding manifold (v(x),∀x)). For egocentric parametrization of self-motion (∆r,θ), we can further write M(∆x) =Mθ (∆r) for ∆x = (∆r cosθ ,∆r sinθ), and the linear model becomes v(x+∆x) = F(v(x),∆r,θ) =Mθ (∆r)v(x). 3.1 Algebraic structure: matrix Lie algebra and Lie group For the linear model (Eq. (8)), the directional derivative is: fθ (v(x)) = F ′(v(x),0,θ) = M ′θ (0)v(x) =B(θ)v(x), where B(θ) =M ′ θ (0), which is the derivative of Mθ (∆r) with respect to ∆r at 0. For infinitesimal self-motion, the transformation model in Eq. (5) becomes v(x+δx) = (I+B(θ)δ r)v(x)+o(δ r), (9) where I is the identity matrix. It can be considered a linear recurrent network where B(θ) is the learnable weight matrix. We have the following theorem for the algebraic structure of the linear transformation. Theorem 3. Assume the linear transformation model so that for infinitesimal self-motion (δ r,θ), the model is in the form of Eq. (9), then for finite displacement ∆r, v(x+∆x) =Mθ (∆r)v(x) = exp(B(θ)∆r)v(x). (10) Proof. We can divide ∆r into N steps, so that δ r = ∆r/N→ 0 as N→ ∞, and v(x+∆x) = (I+B(θ)(∆r/N)+o(1/N))Nv(x)→ exp(B(θ)∆r)v(x) (11) as N→ ∞. The matrix exponential map is defined by exp(A) = ∑∞n=0 An/n!. The above math underlies the relationship between matrix Lie algebra and matrix Lie group in general [38]. 
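Theorem 2's error formula E‖x̂−x‖² = 2τ²/s² can be verified by Monte Carlo simulation. The snippet below uses a toy 4-dimensional embedding that satisfies the isotropic scaling condition with s = ω; this example is not from the paper, and the search is restricted to a local grid because the statement is local.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, tau, n_trials = 20.0, 0.01, 1000

def v(x):                        # d = 4 toy conformal embedding with s = omega
    return np.array([np.cos(omega * x[0]), np.sin(omega * x[0]),
                     np.cos(omega * x[1]), np.sin(omega * x[1])])

x0 = np.array([0.5, 0.5])
g = np.linspace(-0.02, 0.02, 201)                       # local search grid around x0
cand = x0 + np.stack(np.meshgrid(g, g, indexing='ij'), -1).reshape(-1, 2)
codebook = np.stack([v(c) for c in cand])

errs = []
for _ in range(n_trials):
    noisy = v(x0) + rng.normal(scale=tau, size=4)       # v = v(x) + eps
    x_hat = cand[np.argmin(((codebook - noisy) ** 2).sum(-1))]
    errs.append(((x_hat - x0) ** 2).sum())
print(np.mean(errs), 2 * tau ** 2 / omega ** 2)         # the two numbers should be close
```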
1. What is the main contribution of the paper regarding grid cells and 2D navigation? 2. How does the reviewer assess the presentation and structure of the paper? 3. Are there any minor flaws in the theoretical sections that need to be addressed? 4. Why does the approach restrict itself to 2D environments, and could it be expanded to more general Euclidean domains? 5. What do the results show regarding the emergence of hexagonal patterns and how do they validate the proposed method and assumptions? 6. Would it be beneficial to include baseline comparisons for path integration experiments and show empirically the ability for more general group transformations? 7. Does the ablation study adequately assess when hexagonal patterns emerge? 8. Should the condition of F(v, 0, Phi) = v be stated as part of "Condition 1"? 9. Do the embeddings v(x) being on a Lie group manifold characterized by rotation matrices pose an issue for modeling 2D trajectory prediction in Euclidean space?
Summary Of The Paper Review
Summary Of The Paper The paper introduces a general framework for modeling 2D navigation via grid cells. The main idea is to embed a 2D position x into a higher-dimensional positional encoding v(x) and model 2D translations via group transformations in this embedding space. A specific case is linear transformations via rotation matrices that act on the embedding v(x). Under this assumption, it can be shown that hexagonal patterns, which are characteristic of neural grid cells, naturally emerge as an optimal encoding. Review Presentation: According to the submission history, the ICML AC had concerns about the exposition and presentation of the paper. The presentation of the paper in its current form is very clear. I find the structure of the methods part adequate (Sec. 2 - Sec. 5) and easy to follow. Quality of theoretical contribution/mathematical validity: Overall, the theory sections are mathematically sound and the stated assumptions are reasonable (l. 82, l. 107), save for a number of minor flaws. It should be straightforward to address these issues in a minor revision: The defined set of transformations F is intimately related to the standard mathematical concept of a "flow". Curiously, the authors do not mention this connection. This would also provide a stronger justification for Condition 1 (l. 82). Why does the angle Phi not depend on the positional differential dx? In Eq. (5), the expansion of dv would involve both dr and dPhi. If this is an implicit requirement, it should be stated as an explicit assumption. What is the motivation for restricting the approach to 2D environments? One could easily conceive of a formulation for more general Euclidean domains, like R^3. Most of the theory should be analogous. Results: It is fascinating that the hexagonal patterns emerge from the optimization framework. It's also a strong justification for the validity of the proposed method and assumptions. Furthermore, this effect is validated quantitatively in Table 1. It is a little disappointing that all evaluations focus on linear transformation models. One of the motivations of the proposed framework is that it allows for more general group transformations. This should be shown empirically as a proof of concept. The path integration experiments lack baseline comparisons, which makes it hard to put these results into context. I would like to see comparisons to recurrent [3] and PCA-based [33] models in Figure 6. The presented ablation study (l. 289) is useful to assess under which circumstances the hexagonal patterns emerge. Minor remarks: I believe that the condition F(v, 0, Phi) = v (l. 101) is central to the proposed framework and should be stated as part of "Condition 1" (l. 82). The embeddings v(x) lie on the Lie group manifold characterized by rotation matrices. This means that they are contained in an elliptic space with positive curvature. This seems counterintuitive, since the goal is to model 2D trajectory prediction in Euclidean space. E.g., there might be group transformations such that 3 consecutive steps with 90-degree turns between them yield a closed loop. Is this a phenomenon you encountered, or did you find that this effect is relevant in practice?
NIPS
Title Coupled Segmentation and Edge Learning via Dynamic Graph Propagation Abstract Image segmentation and edge detection are both central problems in perceptual grouping. It is therefore interesting to study how these two tasks can be coupled to benefit each other. Indeed, segmentation can be easily transformed into contour edges to guide edge learning. However, the converse is nontrivial since general edges may not always form closed contours. In this paper, we propose a principled end-to-end framework for coupled edge and segmentation learning, where edges are leveraged as pairwise similarity cues to guide segmentation. At the core of our framework is a recurrent module termed the dynamic graph propagation (DGP) layer that performs message passing on dynamically constructed graphs. The layer uses learned gating to dynamically select neighbors for message passing using max-pooling. The output from message passing is further gated with an edge signal to refine segmentation. Experiments demonstrate that the proposed framework is able to let both tasks mutually improve each other. On Cityscapes validation, our best model achieves 83.7% mIoU in semantic segmentation and 78.7% maximum F-score in semantic edge detection. Our method also leads to improved zero-shot robustness on Cityscapes with natural corruptions (Cityscapes-C). ∗Equal contribution. Correspondence to Zhiding Yu <[email protected]>. †Work partially done during an internship at NVIDIA. 1 Introduction Image segmentation and edge detection have been widely studied as important perception problems. The two problems are closely related. In fact, segmentation subsumes edge detection since any segmentation contour makes a closed boundary of a region. The converse is however not true since general edges do not always form closed contours. Nevertheless, edge detection can serve as an auxiliary task to improve segmentation performance since edges provide important pairwise similarity cues for segmentation. Early works tend to focus on the grouping and contrast of pixels from a perceptual similarity perspective. Martin et al. [1] proposed the Berkeley Segmentation Dataset, a popular benchmark for segmentation and boundary detection that inspired many impactful works in perceptual grouping [2–5]. The recent surge of deep learning renders powerful representations with learned features using convolutional neural networks (CNNs) [6]. This has led to great advances in both areas [7–12], but the two tasks are often considered separately. In light of the status quo, we consider coupled edge and segmentation learning. Our goal is twofold: (1) Multi-task learning - being able to produce high-quality edge detection and segmentation. (2) Mutual improvement - the two tasks can help each other with non-trivial performance gains. Designing a principled framework is however nontrivial. The key question is how sparse edge signals can be effectively transformed into dense region-level ones to interact with segmentation. To this end, we propose a learnable recurrent message passing layer where semantic edges are considered as explicitly learned gating signals to refine segmentation. An overview of our framework is shown in Fig. 1. Specifically, the dynamic message passing layer uses affinity gates to select the neighbor for message passing using max-pooling. It conducts message passing sweeps in each of the four directions: left to right, right to left, top to bottom and bottom to top.
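The exact DGP formulation is given later in the paper; purely for intuition, the sketch below shows what one left-to-right propagation sweep with affinity-gated neighbor selection (via max) and an edge gate could look like. All names, shapes, and the precise gating form here are assumptions for illustration, not the paper's layer.

```python
import torch

def lr_sweep(h, affinity, edge_gate):
    """Illustrative left-to-right propagation sweep (not the paper's exact DGP layer).

    h         : (B, C, H, W) segmentation feature map
    affinity  : (B, K, H, W) affinity gates toward K candidate neighbors in the
                previous column (here K = 3 for the three adjacent rows)
    edge_gate : (B, 1, H, W) edge signal in [0, 1]; high values suppress propagation
    """
    B, C, H, W = h.shape
    out = h.clone()
    for x in range(1, W):
        prev = out[..., x - 1]                                     # (B, C, H)
        # candidate neighbors: rows y-1, y, y+1 of the previous column (rows wrap for brevity)
        cand = torch.stack([torch.roll(prev, s, dims=-1) for s in (1, 0, -1)], dim=2)
        gate = affinity[..., x].unsqueeze(1)                       # (B, 1, K, H)
        msg, _ = (gate * cand).max(dim=2)                          # dynamic neighbor via max
        g = 1.0 - edge_gate[:, :, :, x]                            # edge gating
        out[..., x] = h[..., x] + g * msg
    return out
```

A full layer would run analogous sweeps in the remaining three directions and combine their outputs.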
The message passing is jointly gated by both the affinity and edge gates, therefore allowing edge cues to naturally influence long-range dense predictions. As such, our framework presents a context module that is clean, compact yet powerful. Our technical contributions can be summarized as follows:
• We formulate recurrent message passing as dynamic graph propagation (DGP). We show that such a formulation simplifies the required normalization in propagation networks [13]. It also dynamically finds graph structures that encode pixel affinities and improve dense prediction.
• We propose a double-gate design where message passing is jointly gated by both the affinity and edge gates to refine the segmentation feature map. We show that this design, together with the dynamic 1-way connection in DGP, better couples segmentation and edge learning.
• We obtain state-of-the-art results on joint semantic segmentation and edge detection. We also show that DGP leads to strong zero-shot robustness to natural corruptions with significant improvement over prior methods on Corrupted Cityscapes (Cityscapes-C).
Multitasking segmentation and edge learning is desirable for several reasons: 1) There are many downstream applications where both are needed, such as occlusion reasoning [14], localization [15], proposal generation [16–19] and conditional generation [20]. 2) There are many challenging cases where segmentation quality is poor but edge quality is far superior, e.g., segmentation tends to be inferior near object boundaries since segmentation models are often optimized for IoU rather than precision [21]. In these cases, edge learning can potentially capture details missed by segmentation. 3) Model generalization is implicitly improved as a result of the coupled learning [22].
2 Related Work
Semantic Segmentation. There is a rich set of prior work in semantic segmentation. Long et al. [7] proposed fully convolutional end-to-end training of semantic segmentation and pointed out its connection to recognition [6]. Chen et al. [8] introduced atrous convolution and atrous spatial pyramid pooling (ASPP) to capture multi-scale image contexts. Another contemporary work is the wide ResNet-38, which explores a relatively shallower but wider backbone [23]. It has also been shown that context plays an important role in segmentation, including context encoding [24, 25], multi-scale context [26–28] as well as relational context [29, 30]. More recently, there has been a surge of interest in segmentation with Transformers [31–34].
Boundary/Edge Detection. Similar to segmentation, boundary/edge detection has been widely studied as a perceptual grouping problem in the early literature [1–3]. Recent methods tend to resort to CNNs. For example, Bertasius et al. proposed a multi-scale deep network architecture for top-down contour detection, whereas Xie et al. [11] further introduced holistically-nested edge detection (HED) for end-to-end edge learning. Besides detecting binary edges, Hariharan et al. [35] proposed the Semantic Boundaries Dataset (SBD), which has become a popular benchmark for semantic edge detection. Compared to binary edge detection, semantic edge detection involves the semantic classification of edge pixels in addition to localization, which presents additional challenges to existing frameworks. A series of works including HFL [36], CASENet [12], SEAL [37] and STEAL [38] have followed up [35] and pushed the boundaries.
Multi-task Segmentation and Edge Learning.
Multi-task segmentation and edge learning remains under-studied but is not entirely new. Edges have been used as pairwise similarity cues to improve segmentation [3, 39, 18] and superpixels [4]. It was shown that edges can be transformed into dense regions through the Laplacian eigenmaps of a boundary-transformed affinity [36, 5, 40]. Despite being robust, these methods are generally slow and not end-to-end trainable. For more recent CNN-based models, end-to-end multi-task learning on a shared backbone is a natural choice [41]. In addition, proper regularization between segmentation and edges has been shown to improve the performance of both tasks. For example, Takikawa et al. [21] use softmax with temperature to impose consistency between segmentation and semantic edges. Zhen et al. [42] correlate the semantic segmentation and edge detection tasks with a consistency loss. Our work goes beyond multi-tasking and couples the two tasks more deeply through dynamic graph propagation.
Structured Dense Prediction. Segmentation is a dense prediction task where structured information can be useful. Structured prediction models such as Markov random fields (MRFs) [43], conditional random fields (CRFs) [44–46] and energy minimization [47, 48] have proved widely helpful in segmentation problems by imposing contrast-sensitive smoothness. In addition, structured inference can also be unfolded as network layers for end-to-end training [49, 50], therefore combining the advantages from both ends. Recently, there has been an increasing trend to directly model the message passing process itself using multi-dimensional RNNs [51–53], graph RNNs [54, 55] and spatial propagation networks (SPN) [13, 56, 57]. A common advantage is that these methods render more context-aware prediction with larger receptive fields while preserving local details, similar to CRFs. Their ability to train end-to-end allows a more powerful representation of inter-pixel similarities and thus better dense prediction quality. Our work broadly falls into this category. It is worth mentioning that the contrast-sensitive smoothness term in CRFs and propagation networks [52, 13] is an implicit modeling of edge signals that learns to relax smoothness constraints at high-contrast areas. There have also been variants of SPN [58] that take binary edges as gate regularization. However, none of these works have explicitly addressed multi-task learning with category-aware edges, whereas the proposed DGP framework presents a novel and effective solution to this problem.
3 Multi-task network
As a first step towards coupled segmentation and edge learning, we introduce two novel multi-task networks that are able to perform multi-task learning for both segmentation and edge detection. An overview of their architectures is shown in Fig. 2. The rest of the paper follows these notations and covers more details for each module.
Backbone. A CNN giving a semantic feature map (denoted as F) encoding the segmentation information. There are multiple choices of backbone networks, ranging from popular architectures such as DeepLabv2 [8] and DeepLabv3 [27] to the latest state of the art such as DeepLabv3+ [28] and the Dual-Attention Network [30]. Adopting powerful backbones will surely benefit the system-level performance, but the major purpose of this work is not merely to achieve state-of-the-art numbers. We are more interested in showing the effectiveness of the proposed framework on standard backbones.
To this end, we consider both CASENet [12] and ResNet-38 [23] as the standard backbones for benchmarking. Both of them comprehensively cover the feature map resolutions of 1, 1/2, 1/4 and 1/8, with the last resolution being a rule of thumb adopted by many segmentation networks. For both networks, we adopt atrous spatial pyramid pooling layers with dilation rates [6, 12, 18, 24, 30, 36] to capture context information. We set the number of output channels to 128 to produce a semantic feature map, followed by either direct segmentation prediction (baselines) or the proposed DGP layer.
Affinity stream. Convolution layers that aggregate the cross-layer information of the backbone and produce an affinity feature map (denoted as A) to encode the affinity of pairwise neighboring pixels. This stream is only used in the presence of the recurrent message passing layer (including its related baselines). The stream aims to model the similarity of pairwise neighboring pixels and serves as a major input to the gating signal in the recurrent message passing layer. Compared to the edge stream, the affinity stream seeks to capture coarser-level inter-pixel affinity with slightly deeper convolutions and higher dimensions. The resulting feature map A is a 256-dimensional tensor formed by concatenating the ASPP output and the side convolutional feature maps from the affinity stream.
Edge stream. Dense skip-layer features with abundant details that are combined with the edge classification layer through shared concatenation [12] to produce a semantic edge map (denoted as E). We design an improved edge stream over CASENet [12] to better leverage detail information at multiple scales/levels with dense side connections. Unlike CASENet, where the bottom blocks only provide 1-channel side features, we densely obtain side features from every sub-block residual module. In CASENet, this gives side features with a total of 31 dimensions. Similar rules apply to ResNet-38, where the side features have a total of 17 dimensions. We also found that applying a sigmoid after side features with resolution 1/8 greatly benefits edge training in two aspects: (1) It helps to stabilize training and prevent gradient explosions from dense skip connections. (2) It removes the typical aliased-edge issue caused by upsampling/deconvolution and produces elegant, smooth edges. Even though we do not explicitly apply techniques such as edge alignment [37] in this work, the proposed backbone is able to produce high-quality edge predictions under noisy labels. We also notice that it is better to remove the sigmoid for side features with higher resolutions. Edge prediction is multi-tasked alongside ASPP using a 1×1 convolution on top of Res Block5. This returns the K-class coarse edge predictions. Similar to CASENet, we upsample all side features together with the K-dimensional edge predictions to full image resolution and apply shared concatenation, where the side features are repeatedly shared and concatenated with each class for K times, followed by a K-way 3×3 group convolution to produce semantic edges (an illustrative sketch of this shared concatenation is given at the end of this section).
Dynamic graph propagation. Learnable recurrent message passing layer that takes the above three branches as input to produce a refined semantic feature map (denoted as F*). More details regarding this module will be introduced in the next section.
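For concreteness, the listing below sketches the shared concatenation and K-way group convolution described above. It is a minimal PyTorch-style illustration under our own assumptions (module and variable names, tensor shapes, and side features and coarse edge logits already upsampled to a common resolution); it is not the authors' released implementation.

import torch
import torch.nn as nn

class SharedConcatEdgeHead(nn.Module):
    """Illustrative sketch of a CASENet-style shared-concatenation edge head."""
    def __init__(self, num_side_channels: int, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        # One 3x3 filter group per class; each group sees [side features ; coarse logit k].
        self.fuse = nn.Conv2d(
            num_classes * (num_side_channels + 1), num_classes,
            kernel_size=3, padding=1, groups=num_classes)

    def forward(self, side: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # side:   (B, S, H, W) upsampled side features, shared across all classes
        # coarse: (B, K, H, W) K-class coarse edge logits
        chunks = [torch.cat([side, coarse[:, k:k + 1]], dim=1)
                  for k in range(self.num_classes)]
        fused = torch.cat(chunks, dim=1)     # (B, K*(S+1), H, W)
        return self.fuse(fused)              # (B, K, H, W) semantic edge logits

# Example with the assumed CASENet side-feature dimension (31) and 19 classes.
head = SharedConcatEdgeHead(num_side_channels=31, num_classes=19)
edges = head(torch.randn(1, 31, 64, 128), torch.randn(1, 19, 64, 128))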
4 Coupled segmentation and edge learning
Although the multi-task network is able to jointly produce segmentation and edge predictions, we are interested in letting these two tasks be better coupled so that they mutually improve each other. To this end, we look into recent spatial propagation networks, where edges can be transformed into gating signals that exert long-range influence on segmentation. We first give the notations and definitions.
4.1 Notations and Settings
We start with a two-dimensional recurrent network architecture with a linear propagation module passing messages (memories) spatially over a feature map. We define 4-way message passing: Left→Right (Mode L), Right→Left (Mode R), Top→Bottom (Mode T) and Bottom→Top (Mode B). Each way of message passing separately generates a hidden state map H, which can be approximately viewed as a refined (smoothed) version of F. Finally, we take an element-wise max operation to ensemble them, where the model automatically selects, for each pixel, the optimal direction with the highest neural activation:
F* ← H ≜ max(H_L, H_R, H_T, H_B)    (1)
To perform message passing in each way, one often needs to pre-define a graph that encodes the message passing paths. For tractability and computational reasons, such a graph is often sparsely defined with local connections between immediate neighboring pixels. We follow the spatial propagation network (SPN) [13] by defining a three-way local connection, as illustrated in Fig. 3. The advantage of such a design is obvious: taking Mode R propagation as an example, one just needs to initiate the message passing from the rightmost column and recurrently pass the message from right to left, column by column. In this case, the hidden state of each pixel is directly influenced by the three immediate neighboring pixels in the column to its right. Details of the propagation under Mode R are illustrated on the left hand side of Fig. 3.
Let F ∈ R^{H×W×C} be the feature map input to the propagation module and H ∈ R^{H×W×C} the propagation latent space on top of F. In addition, let h_{i,t} and f_{i,t} denote the hidden state and the feature of the i-th pixel at recurrence t on H and F, respectively. (There is a one-to-one correspondence between (i, t) and the pixel index (h, w), but the mapping depends on the propagation mode; in Mode R, for example, t corresponds to the column w = W − t whereas h = i + 1.) We denote by {p^k_{i,t} | k ∈ N(i, t)} the set of learnable propagation gating weights for hidden state h_{i,t}, where N(i, t) is the set of neighbors of pixel (i, t). The spatial propagation in each mode is then defined as:
h_{i,t} = (1 − Σ_{k∈N(i,t)} p^k_{i,t}) ⊙ f_{i,t} + Σ_{k∈N(i,t)} p^k_{i,t} ⊙ h_{k,t−1}    (2)
where ⊙ is the element-wise product and h_{k,t−1} is the hidden state of pixel (i, t)'s neighbor k from the previous recurrence. {p^k_{i,t} | k ∈ N(i, t)} is expandable to an affinity matrix, revealing the global and dense message passing among all the pixels of F.
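To make the sweep structure of Eqs. (1)–(2) concrete, the listing below sketches one right-to-left sweep (Mode R) with the three-way local connection. It is an illustrative PyTorch-style sketch under our own assumptions (tensor layout, replicate padding at the image border, and gates that are already normalized so the output stays bounded); it is not the authors' implementation.

import torch

def mode_r_sweep(feat: torch.Tensor, gates: torch.Tensor) -> torch.Tensor:
    # feat:  (B, C, H, W) feature map F
    # gates: (B, 3, C, H, W) gating weights p^k for the three neighbours in the
    #        column to the right (rows i-1, i, i+1), assumed to be pre-normalized.
    _, _, H, W = feat.shape
    hidden = feat.clone()                      # rightmost column is initialized with F
    for w in range(W - 2, -1, -1):             # sweep from right to left, column by column
        prev = hidden[:, :, :, w + 1]          # previously refined column, (B, C, H)
        above = torch.cat([prev[:, :, :1], prev[:, :, :-1]], dim=2)   # neighbour at row i-1
        below = torch.cat([prev[:, :, 1:], prev[:, :, -1:]], dim=2)   # neighbour at row i+1
        neigh = torch.stack([above, prev, below], dim=1)              # (B, 3, C, H)
        g = gates[:, :, :, :, w]                                      # (B, 3, C, H)
        unary = 1.0 - g.sum(dim=1)                                    # 1 - sum_k p^k
        hidden[:, :, :, w] = unary * feat[:, :, :, w] + (g * neigh).sum(dim=1)   # Eq. (2)
    return hidden

# Toy example; dividing random gates by 3 keeps their per-pixel sum below one.
feat = torch.randn(1, 8, 16, 16)
gates = torch.rand(1, 3, 8, 16, 16) / 3
h_r = mode_r_sweep(feat, gates)
# The four sweeps (Modes L, R, T, B) are then ensembled per Eq. (1), e.g.:
# refined = torch.maximum(torch.maximum(h_l, h_r), torch.maximum(h_t, h_b))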
4.2 Dynamic graph propagation (DGP)
As a linear module, the above operation requires careful normalization of {p^k_{i,t} | k ∈ N(i, t)}. In particular, Σ_{k∈N(i,t)} p^k_{i,t} ≤ 1 should be satisfied, since the energy of the hidden state signal easily becomes unbounded under the recurrent operation. To normalize the gating weights, one may consider a linear self-normalization scheme [13], where the constraint Σ_{k∈N(i,t)} |p^k_{i,t}| ≤ 1 is imposed to guarantee the stability of the propagation by dividing each p^k_{i,t} by Σ_{k∈N(i,t)} |p^k_{i,t}|. (A similar property holds when the same constraint is applied to each dimension of p^k_{i,t} in Equation (2).)
The above formulation, however, also has certain limitations. For example, the linear form allows p^k_{i,t} to be both positive and negative, which potentially encourages p^k_{i,t} to be large towards either the positive or the negative side rather than being monotonic. In addition, when p^k_{i,t} ≥ 0 we have Σ_{k∈N(i,t)} p^k_{i,t} = 1, which means that such a formulation considers zero unary input from f_{i,t}. Therefore, another choice is to constrain p^k_{i,t} with a probabilistic output:
p^k_{i,t} = [exp(p̂^k_{i,t}) / Σ_{k∈N(i,t)} exp(p̂^k_{i,t})] · σ(p̂^k_{i,t}),    (3)
where p̂^k_{i,t} is defined as:
p̂^k_{i,t} = W_a^T [A_{i,t}; A_{k,t−1}],    (4)
and σ(·) is the sigmoid function. Note that p̂^k_{i,t} is the raw gating activation without normalization and is obtained via a linear projection on the concatenated affinity stream features. The above formulation satisfies both Σ_{k∈N(i,t)} p^k_{i,t} ≤ 1 and p^k_{i,t} ≥ 0, while partially considering the unary input with the sigmoid term. Yet the framework is a bit complicated, with many non-linear terms. To this end, we take one more step to further sparsify and simplify the gating response by considering a softmax-with-temperature formulation and taking the limit T → 0:
p^k_{i,t} = lim_{T→0} [exp(p̂^k_{i,t}/T) / Σ_{k∈N(i,t)} exp(p̂^k_{i,t}/T)] · σ(p̂^k_{i,t})    (5)
Substituting Equation (5) into Equation (2), we have:
h_{i,t} = (1 − σ(p*_{i,t})) ⊙ f_{i,t} + σ(p*_{i,t}) ⊙ h*_{i,t},    (6)
where the n-th dimensions of p*_{i,t} and h*_{i,t} are defined as:
k* ≜ argmax_{k∈N(i,t)} p̂^k_{i,t}[n],   p*_{i,t}[n] = p̂^{k*}_{i,t}[n],   h*_{i,t}[n] = h_{k*,t−1}[n].    (7)
Equation (6) essentially leads to a dynamic graph propagation (DGP) framework, where one performs message passing on a dynamic graph structure by picking the neighbor with the highest response. Intuitively, DGP captures a compact structure that is closely related to the directed minimum spanning tree, the Chu–Liu/Edmonds algorithm [59] and the recently proposed tree filters [60, 61]. Such a structure presents an inductive bias that benefits segmentation by filtering out noisy predictions and following only strong signals. Figure 4 illustrates DGP, where we show three example paths on three feature channels. Each channel independently takes different paths depending on its own neighbor affinities.
4.3 Coupling edge prediction with segmentation
To deeply couple edge prediction with segmentation, we further incorporate the edge signal into the above dynamic graph propagation by proposing a double-gate framework:
h_{i,t} = (1 − σ(p*_{i,t}) ⊙ σ(g_{i,t})) ⊙ f_{i,t} + σ(p*_{i,t}) ⊙ σ(g_{i,t}) ⊙ h*_{i,t},    (8)
where g_{i,t} is defined as:
g_{i,t} ≜ W_e ∗ E[i−1 : i+1, t−1 : t+1]    (9)
Note that the edge gating signal is obtained via a 3×3 convolution on the K-channel edge activation map E, which outputs a vector g sharing the same dimension (128) as p* (an illustrative sketch of this double-gated update is given at the end of this section). This way, edges are able to actively influence segmentation via edge-sensitive gating on message passing. We also hope that the refined segmentation activation after DGP can in turn serve as a shape regularizer of the edge prediction. To this end, we output a K-channel edge regularization feature from F_m using a 1×1 convolution, followed by another 3×3 convolution to fuse with E to produce a refined edge map E*. This is illustrated in Fig. 1 on the bottom right. One shall see that the above coupled design lets the two tasks greatly improve each other, and we term the final framework CSEL.
4.4 Training loss
The training loss for CSEL is a multi-task combination of binary cross-entropy (BCE) losses for multi-label edge learning and the cross-entropy (CE) loss for segmentation:
L = L_Seg + L_Edge = L_CE(Conv(F*)) + λ (L_BCE(E*) + L_BCE(E))    (10)
where λ is the parameter controlling the weight of the edge losses relative to the segmentation loss. For segmentation, we use a 3×3 convolution to linearly project the 128-channel activation F* into K channels before the loss.
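As a concrete illustration of Eqs. (6)–(8), the sketch below performs one column update of DGP with the double gate, in the same Mode-R setting as the earlier sketch. All names and shapes are our own assumptions, and the edge gate g_{i,t} is assumed to be precomputed with the 3×3 convolution of Eq. (9); this is not the authors' implementation.

import torch

def dgp_double_gate_step(f_col, neigh, p_hat, edge_gate):
    # f_col:     (B, C, H)    feature column f_{i,t}
    # neigh:     (B, 3, C, H) hidden states h_{k,t-1} of the 3 candidate neighbours
    # p_hat:     (B, 3, C, H) raw affinity gates \hat{p}^k_{i,t}
    # edge_gate: (B, C, H)    edge gating signal g_{i,t} from Eq. (9)
    idx = p_hat.argmax(dim=1, keepdim=True)            # per-channel k*, Eq. (7)
    p_star = torch.gather(p_hat, 1, idx).squeeze(1)    # \hat{p}^{k*}_{i,t}
    h_star = torch.gather(neigh, 1, idx).squeeze(1)    # h_{k*,t-1}
    gate = torch.sigmoid(p_star) * torch.sigmoid(edge_gate)   # double gate of Eq. (8)
    return (1.0 - gate) * f_col + gate * h_star

# Toy shapes: batch 2, 128 channels (the dimension shared by p* and g), height 64.
out = dgp_double_gate_step(torch.randn(2, 128, 64), torch.randn(2, 3, 128, 64),
                           torch.randn(2, 3, 128, 64), torch.randn(2, 128, 64))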
5 Experiments
5.1 Datasets and metrics
Cityscapes. Cityscapes [62] contains 2975 training images, 500 validation images and 1525 private testing images with 19 pre-defined semantic classes. The dataset has been widely adopted as the standard benchmark for both semantic segmentation and semantic edge detection. Following a number of previous works [12, 37, 38, 21], we comprehensively conduct ablation and quantitative studies for both segmentation and edge detection on the validation set.
SBD. The Semantic Boundaries Dataset [35] contains both category-level and instance-level semantic segmentation annotations. The dataset contains 11355 images (8498 for training and 2857 for testing) from the trainval set of PASCAL VOC2011 [63], and follows the 20-class VOC definition.
PASCAL VOC 2012 [64] is a semantic segmentation dataset with 1464 training, 1449 validation and 1456 test images. We use the augmented dataset with 10582 training images, as in [28]. The dataset contains 20 foreground object classes and 1 background class.
COCO Panoptic [10] contains mask annotations for both things and stuff, with a split setting (118K train2017 images and 5K val2017 images) following the detection community.
Evaluation metrics. For evaluation, we consider intersection-over-union (IoU) for segmentation, and the maximum F-score (MF) at optimal dataset scale (ODS) for edge detection; following [37], we apply exactly the same evaluation parameters and settings. We also consider the boundary F-score proposed in [21] for direct comparison on the edge detection task.
5.2 Implementation details
Data loading. During training, we unify the training crop size as 1024×1024 for Cityscapes, 472×472 for SBD and VOC12, and 464×464 for COCO Panoptic. All models are trained for 150k iterations on Cityscapes with batch size 8, 30k iterations on SBD and VOC12 with batch size 16, and 220k iterations on COCO Panoptic with batch size 16. We also perform data augmentation with random mirroring, scaling (scale factors in [0.5, 2.0]) and color jittering.
Optimization. We apply an SGD optimizer with a weight decay of 5 × 10−4 during training. For baselines and methods involving CSEL, we additionally apply a second ADAM optimizer to the propagation layers. In our case, we empirically found that the ADAM optimizer optimizes the propagation layers better.
Learning rate and loss. The base learning rates for methods with ResNet-101/ResNet-38 backbones are unified as 3.0 × 10−8/7.0 × 10−8 across Cityscapes, SBD and VOC12. On COCO Panoptic, the base learning rate is unified as 5.0 × 10−8 for all comparing methods. Unless indicated otherwise, the loss weight λ in Eq. (10) is empirically set to 0.5 to balance L_Seg and L_Edge. We found our method not sensitive to λ, and setting λ = 0.5 works excellently for all backbones and datasets (a sketch of the loss and optimizer setup is given at the end of this subsection).
Using Mapillary Vistas. On Cityscapes, a number of works consider large-scale pretraining on Mapillary Vistas (MV) [65], which is shown to considerably benefit the segmentation performance. Unless indicated otherwise, our method does not adopt Mapillary Vistas pre-training for fair comparison.
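The listing below summarizes the objective of Eq. (10) with λ = 0.5 together with the two-optimizer setup described above. It is a simplified PyTorch-style sketch under our assumptions (a model exposing separate backbone and propagation parameter groups, a 255 ignore label for segmentation as is common for Cityscapes, placeholder learning rates); it is not the exact training code.

import torch
import torch.nn.functional as F

def csel_loss(seg_logits, seg_target, edge_coarse, edge_refined, edge_target, lam=0.5):
    # seg_logits:  (B, K, H, W) segmentation logits from Conv(F*)
    # seg_target:  (B, H, W)    class indices; 255 is assumed to be the ignore label
    # edge_*:      (B, K, H, W) multi-label edge logits E and E*; edge_target in {0, 1}
    loss_seg = F.cross_entropy(seg_logits, seg_target, ignore_index=255)
    loss_edge = (F.binary_cross_entropy_with_logits(edge_refined, edge_target) +
                 F.binary_cross_entropy_with_logits(edge_coarse, edge_target))
    return loss_seg + lam * loss_edge              # Eq. (10)

# Hypothetical two-optimizer setup; parameter-group names and learning rates are
# placeholders, not the exact values used in the paper.
# opt_backbone = torch.optim.SGD(model.backbone_parameters(), lr=base_lr, weight_decay=5e-4)
# opt_prop     = torch.optim.Adam(model.propagation_parameters(), lr=base_lr)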
5.3 Experiments on Cityscapes
Table 1: Ablation study of semantic segmentation on the Cityscapes validation set (mIoU).
Method     ResNet-101   ResNet-38
ST         77.9         78.55
MT         78.4         79.43
SPN        80.0         -
SPN+Edge   80.4         -
DGP        81.3         -
CSEL−      80.9         -
CSEL       82.8         82.8
CSEL-MS    83.7         83.4
Ablation study (semantic segmentation). Table 1 shows the ablation studies on semantic segmentation, where we consider apples-to-apples comparisons of methods trained with the same backbones and training protocols: 1) ST: single-task segmentation network without the propagation layer but keeping the 128-channel feature map F. 2) MT: the same as ST but naively multi-tasking segmentation with edge detection. 3) SPN [13]: implementation of SPN with the 3-way propagation on F, where the gating signal is computed using the affinity stream A. 4) SPN+Edge: SPN coupled with edge learning following exactly the same double-gate design. 5) DGP: baseline that contains the proposed DGP layer without the double-gate design. 6) CSEL−: baseline that is the same as CSEL except for removing the edge loss. 7) CSEL: our full method with single-scale inference. 8) CSEL-MS: CSEL with multi-scale inference at {0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0}. We also consider both instance-sensitive (IS) and non-instance-sensitive (Non-IS) settings, where the edge training/evaluation labels are with/without instance edges.
Table 2: Main results of semantic segmentation and semantic edge learning on Cityscapes.
(a) Main semantic segmentation results on the Cityscapes validation set.
Method               Backbone      MV   mIoU
PSPNet [26]          ResNet-101         78.8
DeepLabV3+ [28]      ResNet-101         78.8
CCNet [66]           ResNet-101         80.5
GSCNN [21]           ResNet-101         74.7
DANet [30]           ResNet-101         81.5
RPCNet [42]          ResNet-101         82.1
CSEL                 ResNet-101         83.7
SGPN [57]            ResNet-38          80.9
GSCNN† [21]          ResNet-38          80.8
VRec-JP (LR) [67]    ResNet-38          81.4
Axial-DeepLab [68]   AxialRes-XL        81.1
CSEL                 ResNet-38          83.4
(b) Main semantic segmentation results on the Cityscapes test set.
Method          Backbone     MV   mIoU
PSPNet [26]     ResNet-101        78.4
PSANet [69]     ResNet-101        80.1
ASPP [27]       ResNet-101        80.1
BFP [58]        ResNet-101        81.4
DA-Net [30]     ResNet-101        81.5
OCR [29]        ResNet-101        81.8
CCNet [66]      ResNet-101        81.9
RPCNet [42]     ResNet-101        81.8
CSEL            ResNet-101        82.1
GSCNN           ResNet-38    X    82.9
CSEL            ResNet-38    X    83.5
(c) Main results of semantic edge detection on the Cityscapes validation set, measured by maximum F-score (MF) at optimal dataset scale.
Method         Backbone     MF (IS)   MF (Non-IS)
CASENet [12]   ResNet-101   68.1      68.9
SEAL [37]      ResNet-101   69.1      -
STEAL [38]     ResNet-101   69.7      71.4
RPCNet [42]    ResNet-101   -         78.2
CSEL           ResNet-101   78.1      78.3
CSEL           ResNet-38    -         78.7
Discussions. We make sure that the comparisons are apples-to-apples and fair by using the same backbones and training recipes (such as learning rate, crop size, batch size, and number of iterations) for all comparing methods. The DGP baseline can be considered an apples-to-apples counterpart of SPN, whereas the CSEL− baseline is a further study to understand whether the improvement in CSEL purely comes from the representation being enriched with edge features. One can observe several trends from the results: 1) The proposed multi-task network consistently outperforms its single-task counterpart. 2) CSEL outperforms both SPN and SPN+Edge with the dynamic graph propagation and edge gating. 3) The dynamic graph design in DGP alone helps it to achieve 81.3% mIoU, outperforming both SPN and SPN+Edge by 1.3% and 0.9%, respectively. 4) The incorporation of edge guidance with the double-gate design further leads to another non-trivial 1.5% improvement. Simply adding the edge feature does not improve the segmentation quality; in fact, CSEL− is even slightly lower than DGP, where no edge features are involved.
This reflects that the value of the edge signal lies in providing structural guidance rather than in making the representation more expressive.
Comparison to state-of-the-art (semantic segmentation). We also compare CSEL to state-of-the-art semantic segmentation models and list the results in Tables 2a and 2b. For clarity, we divide the table into two parts, grouping models that use the same type of backbone. Among the compared methods, CSEL obtains the best results with both ResNet-101 and ResNet-38 backbones.
Ablation study (edge detection). We additionally conduct experiments on edge detection. We first show an ablation study in Table 3, where ST indicates a single-task edge detection network with the proposed edge stream, and MT indicates naive multi-task training of both segmentation and edge detection. MT leads to a slight performance degradation on the edge detection task compared to single-task training on edge detection. However, the degradation is marginal compared to the significant improvement of the edge quality introduced by the deep coupling of the two tasks in CSEL. Again, this shows the considerable benefit of coupled learning with both segmentation and edge detection.
Comparison to state-of-the-art (edge detection). We also compare to state-of-the-art semantic edge detection methods in Table 2c, where we present the training/evaluation protocols in two categories (IS and Non-IS) following SEAL. Note that we evaluate the results with the edge thinning protocol. One can see that our method achieves the best performance in both settings, with significant performance gains from the coupled edge and segmentation learning as well as from the dynamic graph propagation module. Besides the evaluation protocol from SEAL, we also follow a separate evaluation protocol from GSCNN [21] in Table 4, where we use the original evaluation code base from GSCNN to evaluate the boundary quality of semantic segmentation. One can observe that at all thresholds, CSEL with both ResNet-101 and ResNet-38 backbones outperforms DeepLabV3+ and GSCNN by significant margins in terms of boundary quality.
5.4 Experiments on SBD
We also evaluate CSEL on the SBD dataset and compare with previous state-of-the-art semantic edge detection models. The main results are presented in Table 5. Note that we used the re-annotated SBD test set from [37] to pursue a more precise semantic edge evaluation. From the table, one can see that CSEL outperforms other state-of-the-art methods in both IS and Non-IS settings.
5.5 Experiments on PASCAL VOC12
We evaluate CSEL on the PASCAL VOC12 dataset and report the main results of semantic segmentation in Table 6. The performance is evaluated as the mean IoU over the 20 PASCAL VOC classes plus the background class. We compare CSEL with previous competitive methods that are state-of-the-art (DeepLabv3+) or closely related (SPN/DFN). CSEL achieves considerable improvements over these methods.
5.6 Experiments on COCO Panoptic
We further conduct experiments on the full COCO Panoptic dataset. Several earlier methods have reported results on COCO-Stuff 10K, which is an earlier version of COCO with much less data; as a result, it does not differ significantly from VOC12 in either the size or the diversity of the data. We therefore adopt the full COCO Panoptic dataset, which is significantly larger and is widely accepted by the detection and instance/panoptic segmentation communities. We hope that our work presents solid baselines that inspire subsequent research on this benchmark.
Specifically, we convert the instance segmentation annotations to semantic segmentation ones, and compare segmentation methods with or without the proposed CSEL module. All comparing methods adopt the same CASENet (ResNet-101) architecture. Other hyperparameters are kept exactly the same. Models are trained on the train2017 split and tested on val2017 with single-scale inference. From the results in Table 7, one can see that adding CSEL leads to consistently improved results over the naïve segmentation baselines. In addition, multi-task learning with both segmentation and edge losses also slightly outperforms the single-task segmentation baseline. Finally, CSEL slightly outperforms SPN, with a reduced gain compared to the results on Cityscapes. We hypothesize that the reduced gain is partly caused by the noisier mask annotations on COCO Panoptic, which make edge learning more challenging and decrease the edge gate quality.
5.7 Robustness Against Natural Corruptions
We hypothesize that CSEL also brings robustness to the segmentation model. We show that this is the case and verify it empirically on Cityscapes-C in Table 8. Specifically, we follow the standard corruption package provided by [70] to corrupt the Cityscapes validation images. This expands the Cityscapes validation set with 16 types of algorithmically generated corruptions from 4 major categories: “Noise”, “Blur”, “Weather” and “Digital”. Each corruption type also contains 5 severity levels, leading to 2500 evaluation images for each type alone (we only consider severity levels 1-3 for the “Noise” category, following [71]). Our results show that CSEL overall improves the robustness significantly, especially compared to ResNet-38 and GSCNN [21], which share the same backbone. On the other hand, the method does still show some vulnerability to certain types of corruptions, particularly those belonging to the “Noise” category. However, this trend is aligned with ResNet-38 and GSCNN, and based on this pattern we hypothesize that it may be partly related to the specific design of ResNet-38.
6 Conclusion
We proposed a unified framework for coupled segmentation and edge learning. Our work revisits two long-standing and important perceptual grouping problems: semantic segmentation and edge detection. Our method (CSEL) includes a novel end-to-end multi-task network, a recurrent dynamic graph propagation layer, as well as deep coupling of the two tasks on top of them. Finally, results show that careful coupling of the tasks leads to significant improvements on both of them.
Broader Impact
Our method couples two long-standing important vision problems, semantic segmentation and edge detection, under a unified framework. Besides the various practical benefits from deeply coupling these two tasks, we expect the research to inspire considerable insights, interest and revisits of perceptual grouping, mid-level representations and structured prediction. The result of this research is likely to find diverse scene understanding applications such as autonomous driving and robot navigation. Like many other discriminative recognition models, our method inevitably faces challenges from input data quality and underlying data biases. The model behavior is subject to various factors such as data distributions, domain gaps, label quality and fairness. We encourage researchers to focus on “in the wild” robustness when the above challenges are present.
Acknowledgement
We thank the NVIDIA GPU Cloud (NGC) team for the computing support of this work.
We also thank the anonymous reviewers and the other NVIDIA colleagues who helped to improve this work with discussions and constructive suggestions.
7 Checklist
1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] Please see the result analysis on Cityscapes-C.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] Please see Broader Impact.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] We are going through the legal approval process at this moment and will release the source code upon approval.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Please refer to the implementation details in both the main paper and the supplementary.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We observed stable mIoU results for semantic segmentation models.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Please refer to the supplementary.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [Yes] The data used in our work is open source and can be used for academic research.
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The data used in our work is open source and can be used for academic research.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data does not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
NIPS
Title Coupled Segmentation and Edge Learning via Dynamic Graph Propagation Abstract Image segmentation and edge detection are both central problems in perceptual grouping. It is therefore interesting to study how these two tasks can be coupled to benefit each other. Indeed, segmentation can be easily transformed into contour edges to guide edge learning. However, the converse is nontrivial since general edges may not always form closed contours. In this paper, we propose a principled end-to-end framework for coupled edge and segmentation learning, where edges are leveraged as pairwise similarity cues to guide segmentation. At the core of our framework is a recurrent module termed as dynamic graph propagation (DGP) layer that performs message passing on dynamically constructed graphs. The layer uses learned gating to dynamically select neighbors for message passing using max-pooling. The output from message passing is further gated with an edge signal to refine segmentation. Experiments demonstrate that the proposed framework is able to let both tasks mutually improve each other. On Cityscapes validation, our best model achieves 83.7% mIoU in semantic segmentation and 78.7% maximum F-score in semantic edge detection. Our method also leads to improved zero-shot robustness on Cityscapes with natural corruptions (Cityscapes-C). 1 Introduction Image segmentation and edge detection have been widely studied as important perception problems. The two problems are closely related. In fact, segmentation subsumes edge detection since any segmentation contour makes a closed boundary of a region. The converse is however not true since general edges do not always form closed contours. Nevertheless, edge detection can serve as an auxiliary task to improve segmentation performance since edges provide important pairwise similarity cues for segmentation. Early works tend to focus on the grouping and contrast of pixels from a perceptual similarity perspective. Martin et al. [1] proposed the Berkeley Segmentation Dataset, a popular benchmark for segmentation and boundary detection that inspired many impactful works in perceptual grouping [2–5]. The recent surge of deep learning renders powerful representations with learned features using convolutional neural networks (CNNs) [6]. This has led to great advances in both areas [7–12], but the two tasks are often considered separately. In light of the status quo, we consider coupled edge and segmentation learning. Our goal is twofold: (1) Multi-task learning - being able to produce high quality edge detection and segmentation. (2) Mutual improvement - the two tasks can help each other with non-trivial performance gains. Designing a principled framework is however nontrivial. The key question is how sparse edge signals can be effectively transformed into dense region-level ones to interact with segmentation. To this end, we propose a learnable recurrent message passing layer where semantic edges are considered ∗Equal contribution. Correspondence to Zhiding Yu <[email protected]>. †Work partially done during an internship at NVIDIA. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). as explicitly learned gating signals to refine segmentation. An overview of our framework is shown in Fig. 1. Specifically, the dynamic message passing layer uses affinity gates to select the neighbor for message passing using max-pooling. It conducts message passing sweeps in each of the four directions: left to right, right to left, top to bottom and bottom to top. 
The message passing is jointly gated by both the affinity and edge gates, therefore allowing edge cues to naturally influence long-range dense predictions. As such, our framework presents a context module that is clean, compact yet powerful. Our technical contributions can be summarized as follows: • We formulate recurrent message passing as dynamic graph propagation (DGP). We show that such a formulation simplifies the required normalization in propagation networks [13]. It also dynamically finds graph structures that encodes pixel affinities and improves dense prediction. • We propose a double-gate design where message passing is jointly gated by both the affinity and edge gates to refine the segmentation feature map. We show that this design together with the dynamic 1-way connection in DGP better couples segmentation and edge learning. • We obtain state-of-the-art results on joint semantic segmentation and edge detection. We also show that DGP leads to strong zero-shot robustness to natural corruptions with significant improvement over prior methods on Corrupted Cityscapes (Cityscapes-C). Multitasking segmentation and edge learning is desirable for several reasons: 1) There are many downstream applications where both are needed, such as occlusion reasoning [14], localization [15], proposal generation [16–19] and conditional generation [20]. 2) There are many challenging cases where segmentation quality is poor but edge quality is far more superior, e.g. segmentation tends to be inferior near object boundaries since they are often optimized for IoU rather than precision [21]. In these cases, edge learning can potentially capture details missed by segmentation. 3) Implicitly improved model generalization as a result of the coupled learning [22]. 2 Related Work Semantic Segmentation. There is a rich set of prior work in semantic segmentation. Long et al., [7] proposed fully convolutional end-to-end training of semantic segmentation and pointed out its connection to recognition [6]. Chen et al. [8] introduced atrous convolution and atrous spatial pyramid pooling (ASPP) to capture multi-scale image contexts. Another contemporary work is the wide ResNet-38 which explores a relatively shallower but wider backbone [23]. It has also been shown that context plays an important role in segmentation, including context encoding [24, 25], multi-scale context [26–28] as well as relational context [29, 30]. More recently, there has been a surge of interests in segmentation with Transformers [31–34] Boundary/Edge Detection. Similar to segmentation, boundary/edge detection have been widely studied as perceptual grouping problems in early literature [1–3]. Recent methods tend to resort to CNNs. For example, Bertasius et al. proposed a multi-scale deep network architecture for top-down contour detection, where as Xie et al. [11] further introduced holistically-nested edge detection (HED) for end-to-end edge learning. Besides detecting binary edges, Hariharan et al. [35] proposed the Semantic Boundaries Dataset (SBD) which has become a popular benchmark for semantic edge detection. Compared to binary edge detection, semantic edge detection involves the semantic classification of edge pixels in addition to localization which presents additional challenges to existing frameworks. A series of works including HFL [36], CASENet [12], SEAL [37] and STEAL [38] have followed up [35] and pushed the boundaries. Multi-task Segmentation and Edge Learning. 
Multi-task segmentation and edge learning remains under-studied but is not entirely new. Edges have been pairwise similarity cues to improve segmentation [3, 39, 18] and superpixels [4]. It was shown that edges can be transformed into dense regions through the Laplacian Eigenmaps of boundary transformed affinity [36, 5, 40]. Despite being robust, these methods are generally slow and not end-to-end trainable. For more recent CNN based models, end-to-end multi-task learning on a shared backbone is a natural choice [41]. In addition, proper regularization between segmentation and edge has been shown to improve the performance of both tasks. For example, Takikawa et al. [21] use softmax with temperature to impose consistency between segmentation and semantic edges. Zhen et al. [42] correlate semantic segmentation and edge detection tasks with a consistency loss. Our work goes beyond multi-tasking and let the two task more deeply coupled through dynamic graph propagation. Structured Dense Prediction. Segmentation is a dense prediction task where structured information can be useful. Structured prediction models such as Markov random fields (MRFs) [43], conditional random fields (CRFs) [44–46] and energy minimization [47, 48] have widely proved helpful in segmentation problems by imposing contrast-sensitive smoothness. In addition, structured inference can also be unfolded as network layers for end-to-end training [49, 50], therefore combining the advantages from both ends. Recently, there is an increasing trend to directly model the message passing process itself using multi-dimensional RNN [51–53], graph RNN [54, 55] and spatial propagation network (SPN) [13, 56, 57]. A common advantage is that these methods render more context-aware prediction with larger receptive fields while preserving local details similar to CRFs. Their ability to train end-to-end allows more powerful representation of inter-pixel similarities and thus better dense prediction quality. Our work can be broadly categorized into this category. It is worth mentioning that the contrast sensitive smoothness term in CRF and propagation networks [52, 13] is an implicit modeling of edge signals by learning to relax smoothness constraints at high contrast areas. There have also been variants of SPN [58] that take binary edge as gate regularization. However, none of these works have explicitly addressed multi-tasking learning with category-aware edges whereas the proposed DGP framework presents a novel and effective solution to this problem. 3 Multi-task network As a first step towards coupled segmentation and edge learning, we introduce two novel multi-task networks that are able to perform multi-task learning for both segmentation and edge detection. An overview of their architectures is shown in Fig. 2. The rest of the paper follows these notations and covers more details for each module. Backbone. A CNN giving a semantic feature map (denoted as F ) encoding the segmentation information. There could be multiple choices of the backbone networks, ranging from the popular architectures of DeepLabv2 [8], DeepLabv3 [27] to latest state-of-the-arts such as DeepLabv3+ [28] and Dual-Attention Network [30]. Adopting powerful backbones will surely benefit the system-level performance, but the major purpose of this work does not completely lie in achieving state-of-the-arts. We are more interested in showing the effectiveness of a proposed framework on standard backbones. 
To this end, we consider both CASENet [12] and ResNet-38 [23] as the standard backbones for benchmarking. Both of them comprehensively covers the feature map resolutions of 1, 1/2, 1/4 and 1/8, with the last resolution being a rule of thumb adopted by many segmentation networks. For both networks, we adopt atrous spatial pyramid pooling layers with dilation rate [6, 12, 18, 24, 30, 36] to capture context information. We set the output channels to be 128 to produce a semantic feature map, fowllowed by either direct segmentation prediction (baselines) or the proposed DGP layer. Affinity stream. Convolution layers that aggregate the cross-layer information of the backbone and produces an affinity feature map (denoted as A) to encode the affinity of pairwise neighboring pixels. This stream is only used with the presence of recurrent message passing layer (including its related baselines). The stream aims to model the simialrity of pair-wise neighboring pixels and serve as a major input to the gating signal in recurrent message passing layer. Compared to edge stream, the affinity stream seeks to capture coarser-level inter-pixel affinity with slightly deeper convolutions and higher dimensions. The resulting feature map A is a 256 dimensional tensor concatenated by ASPP and the side convolutional feature maps from the affinity stream. Edge stream. Dense skip-layer features with abundant details that are combined with edge classification layer through shared concatenation [12] to produce a semantic edge map (denoted as E). We design an improved edge stream over CASENet [12] to better leverage detail information at multiple scales/levels with dense side connections. Unlike CASENet where the bottom blocks only provide 1-channel side features, we densely obtain side features from every sub-block residual modules. In CASENet, this gives side features with a total number of 31 dimensions. Similar rules apply to ResNet-38, where the side feature has a total number of 17 dimensions We also found that applying sigmoid after side features with resolution 1/8 greatly benefits edge training from two aspects: (1) It helps to stabilize and prevent gradient explosions from dense skip connections. (2) It removes the typical aliasing edge issue caused by upsampling/deconvolution, and produces elegant smooth edges. Even though we do not explicitly apply techniques such as edge alignment [37] in this work, the proposed backbone is able to produce high quality edge predictions under noisy labels. We also notice that it is better to remove sigmoid for side features with higher resolutions. Edge prediction is multi-tasked alongside with ASPP using 1× 1 convolution on top of Res Block5. This returns the K classes coarse edge predictions. Similar to CASENet, we upsample all side features together with the K-dimensional edge predictions to full image resolution and apply shared concatenation, where side features are repetitively shared and concatenated with each class for K times, followed by a K-way 3× 3 group convolution to produce semantic edges. Dynamic graph propagation. Learnable recurrent message passing layer that takes the above three branches as input to produce a refined semantic feature map (denoted as F ∗). More details regarding this module will be introduced in the next section. 4 Coupled segmentation and edge learning Although the multi-task network is able to jointly produce segmentation and edge prediction, we are interested in letting these two tasks better coupled to mutually improve each other. 
To this end, we look into recent spatial propagation networks where edges can be transformed into gating signals that produce long range influence to segmentation. We first give the notations and definitions: 4.1 Notations and Settings We start with a two-dimensional recurrent network architecture with a linear propagation module passing messages (memories) spatially over a feature map. We define a 4-way message passing: 1. Left→ Right (Mode L), Right→Left (Mode R), Top→Bottom (Mode T) and Bottom→Top (Mode B). Each way of message passing will separately generate a hidden state map H which can be approximately viewed as a refined (smoothed) version of F . Finally, we take an element-wise max operation to ensemble them where the model automatically selects the optimal direction with highest neural activation for each pixel: F ∗ ←H , max (HL, HR, HT , HB) (1) To perform message passing in each way, one often needs to pre-define a graph that encodes the message passing paths. For tractability and and computation issues, such graph is often sparsely defined with locally connections between immediate neighboring pixels. We follow spatial propagation network (SPN) [13] by defining a three-way local connection, as illustrated on the hand side of Fig. 3. The advantage of such design is obvious: Taking Mode R propagation as an example, one just needs to initiate the message passing from the right most column, and recurrently pass the message from right to left column by column. In this case, the hidden state of each pixel is directly influenced by the three immediate neighboring pixels on the right column. Details of the propagation under Mode R is illustrated on the left hand side of Fig. 3. Let F ∈ RH×W×C be the feature map input to the propagation module, H ∈ RH×W×C the propagation latent space on top of F . In addition, let hi,t and fi,t denote the hidden state and feature of the i th pixel located at recurrence t3 on H and F , respectively. We denote {pki,t|k ∈ N(i, t)} the set of learnable propagation gating weights for hidden state h(i, t) where N(i, t) is the set of neighbors of pixel (i, t). In this case, the spatial propagation in each mode is defined as: hi,t = ( 1− ∑ k∈N(i,t) pki,t ) fi,t + ∑ k∈N(i,t) pki,t hk,t−1 (2) where is element-wise product, and hk,t−1 is the hidden state of pixel (i, t)’s neighbors from the previous recurrence. {pki,t|k ∈ N(i, t)} is expandable to an affinity matrix, revealing the global and dense message passing among all the pixels of F . 4.2 Dynamic graph propagation (DGP) As a linear module, the above operation requires careful normalization of among {pki,t|k ∈ N(i, t)}. In particular, ∑ k∈N(i,t) p k i,t ≤ 1 should be satisfied since the energy of the hidden state signal gets unbounded easily under the recurrence operation. To normalize the gating weights, one may consider a linear self-normalization scheme [13] where the constraint ∑ k∈N(i,t) |pki,t| ≤ 1 is imposed to guarantee the stableness of the propagation4 via dividing each pki,t with ∑ k∈N(i,t) |pki,t|. But the above formulation also leads to certain limitations. For example, the linear form allows pki,t to be both positive and negative, which potentially encourages pki,t to be large towards either positive or negative side rather than being monotonic. In addition, when pki,t ≥ 0, ∑ k∈N(i,t) p k i,t = 1 which means that such formulation considers zero unary input from fi,t. 
Therefore, another choice is to constrain pki,t with a probabilistic output: pki,t = exp(p̂ki,t)∑ k∈N(i,t) exp(p̂ k i,t) σ(p̂ki,t), (3) where p̂ki,t is defined as: p̂ki,t = W > a [Ai,t; Ak,t−1], (4) 3There is a one-to-one correspondence between (i, t) and pixel index (h,w). But the mapping varies subject to the propagation mode. In Mode R for example, t corresponds to column with w = W − t whereas h = i+1. 4Similar property holds when the same constraint is applied to each dimension of pki,t in Equation (2). and σ(·) is the Sigmoid function. Note that p̂ki,t is the raw gating activation without normalization, and is obtained via a linear projection on the concatenated affinity stream features. The above formation satisfies both ∑ k∈N(i,t) p k i,t ≤ 1 and pki,t ≥ 0, while partially considering the unary input with the Sigmoid term. Yet the framework is a bit complicated with many non-linear terms. To this end, we take one more step to further sparsifying and simplifying the gating response by considering a softmax-with-temperature formulation and taking the limit of T → 0: pki,t = lim T→0 exp(p̂ki,t)/T∑ k∈N(i,t) exp(p̂ k i,t)/T σ(p̂ki,t) (5) Substituting Equation (5) into Equation (2), we have: hi,t = ( 1− σ(p∗i,t) ) fi,t + σ(p∗i,t) h∗i,t, (6) where the n-th dimension of p∗i,t and h ∗ i,t are defined as: k∗ , argmax k∈N(i,t) {p̂ki,t[n]} p∗i,t[n] = p̂k ∗ i,t [n] h ∗ i,t[n] = hk∗,t−1[n]. (7) Equation (6) essentially leads to a dynamic graph propagation (DGP) framework where one performs message passing on a dynamic graph structure by picking neighbor with the highest response. Intuitively, DGP captures a compact structure that has close relation to directed minimum spanning tree, Chu–Liu/Edmonds’ algorithm [59] and the recently pro- posed tree filters [60, 61]. Such structure presents an inductive bias that benefits segmentation by filtering out noisy prediction and following only strong signals. Figure 4 illustrates DGP where we show three example paths on three feature channels. Each channel independently takes different paths depending on its own neighbor affinities. 4.3 Coupling edge prediction with segmentation To deeply couple edge prediction with segmentation, we further incorporate edge signal into the above dynamic graph propagation by proposing a double-gate framework: hi,t = ( 1− σ(p∗i,t) σ(gi,t) ) fi,t + σ(p∗i,t) σ(gi,t) h∗i,t, (8) where gi,t is defined as: gi,t , We ∗E[i− 1 : i+ 1, t− 1 : t+ 1] (9) Note that the edge gating signal is obtained by via a 3 × 3 convolution on the K-channel edge activation map E, which outputs a vector g sharing the same dimension (128) as p∗. This way, edges are able to actively influence segmentation via edge-sensitive gating on message passing. We also hope that the refined segmentation activation after DGP can alternatively serve as a shape regularizer of the edge prediction. To this end, we output a K-channel edge regularization feature from Fm using 1× 1 convolution, followed by another 3× 3 convolution to fuse with E to produce a refined edge map E∗. This is illustrated in Fig. 1 on the bottom right. One shall see that the above coupled design let the two tasks greatly improve each other, and we term the final framework CSEL. 4.4 Training loss The training loss for CSEL is a multi-task combination of cross-entropy (BCE) losses for multi-label edge learning and the cross-entropy (CE) loss for segmentation: L = LSeg + LEdge = LCE(Conv(F ∗)) + λ(LBCE(E∗) + LBCE(E)) (10) where λ is the parameter controlling the weight of segmentation loss. 
For segmentation, we use a $3 \times 3$ convolution to linearly project the 128-channel activation $F^*$ into a $K$-channel prediction before the loss.

5 Experiments

5.1 Datasets and metrics

Cityscapes. Cityscapes [62] contains 2975 training images, 500 validation images and 1525 private testing images with 19 pre-defined semantic classes. The dataset has been widely adopted as the standard benchmark for both semantic segmentation and semantic edge detection. Following a number of previous works [12, 37, 38, 21], we comprehensively conduct ablation and quantitative studies for both segmentation and edge detection on the validation set.

SBD. The Semantic Boundaries Dataset [35] contains both category-level and instance-level semantic segmentation annotations. The dataset contains 11355 images (8498 for training and 2857 for testing) from the trainval set of PASCAL VOC2011 [63], and follows the 20-class VOC definition.

PASCAL VOC 2012 [64] is a semantic segmentation dataset with 1464 training, 1449 validation and 1456 test images. We use the augmented dataset with 10582 training images, as in [28]. The dataset contains 20 foreground object classes and 1 background class.

COCO Panoptic [10] contains mask annotations for both things and stuff, with a split setting (118K train 2017 images and 5K validation 2017 images) following the detection community.

Evaluation metrics. For evaluation, we consider intersection-over-union (IoU) for segmentation, and the maximum F-score (MF) at optimal dataset scale (ODS) for edge detection (see footnote 5). We also consider the boundary F-score proposed in [21] for direct comparison on the edge detection task.

5.2 Implementation details

Data loading. During training, we unify the training crop size as 1024 × 1024 for Cityscapes, 472 × 472 for SBD and VOC12, and 464 × 464 for COCO Panoptic. All models are trained for 150k iterations on Cityscapes with batch size 8, 30k iterations on SBD and VOC12 with batch size 16, and 220k iterations on COCO Panoptic with batch size 16. We also perform data augmentation with random mirroring, scaling (scale factors in [0.5, 2.0]) and color jittering.

Optimization. We apply an SGD optimizer with a weight decay of 5 × 10⁻⁴ during training. For baselines and methods involving CSEL, we additionally apply a second Adam optimizer to the propagation layers. In our case, we empirically found that the Adam optimizer optimizes the propagation layers better and yields higher performance.

Learning rate and loss. The base learning rates for methods with ResNet-101/ResNet-38 backbones are unified as 3.0 × 10⁻⁸ / 7.0 × 10⁻⁸ across Cityscapes, SBD and VOC12. On COCO Panoptic, the base learning rate is unified as 5.0 × 10⁻⁸ for all compared methods. Unless indicated, the loss weight λ in Eq. (10) is empirically set to 0.5 to balance L_Seg and L_Edge. We found our method is not sensitive to λ, and setting λ = 0.5 works well for all backbones and datasets.

Using Mapillary Vistas. On Cityscapes, a number of works consider large-scale pretraining on Mapillary Vistas (MV) [65], which is shown to considerably benefit segmentation performance. Unless indicated, our method does not adopt Mapillary Vistas pre-training, for fair comparison.
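As a concrete illustration of the two-optimizer setup above (SGD for the backbone and heads, a separate Adam optimizer for the propagation layers), here is a hypothetical PyTorch snippet; the parameter-name filter and any hyperparameters not stated in the text are assumptions.

```python
import torch

def build_optimizers(model, base_lr, prop_lr=None, weight_decay=5e-4):
    """Split parameters into propagation (DGP) layers and everything else."""
    prop_params, main_params = [], []
    for name, param in model.named_parameters():
        (prop_params if "dgp" in name else main_params).append(param)  # assumed naming
    opt_main = torch.optim.SGD(main_params, lr=base_lr, momentum=0.9,
                               weight_decay=weight_decay)
    opt_prop = torch.optim.Adam(prop_params, lr=prop_lr or base_lr)
    return opt_main, opt_prop

# Usage: a single backward pass, then step both optimizers.
# loss.backward(); opt_main.step(); opt_prop.step()
```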
5.3 Experiments on Cityscapes

Table 1: Ablation study of semantic segmentation on the Cityscapes validation set (mIoU, %).
Method    | ResNet-101 | ResNet-38
ST        | 77.9       | 78.55
MT        | 78.4       | 79.43
SPN       | 80.0       | -
SPN+Edge  | 80.4       | -
DGP       | 81.3       | -
CSEL−     | 80.9       | -
CSEL      | 82.8       | 82.8
CSEL-MS   | 83.7       | 83.4

Ablation study (semantic segmentation). Table 1 shows the ablation studies on semantic segmentation, where we consider apples-to-apples comparisons among methods that are trained following the same backbones and training protocols: 1) ST: single-task segmentation network without the propagation layer but keeping the 128-channel feature map F. 2) MT: the same as ST but naively multi-tasking segmentation with edge detection. 3) SPN [13]: implementation of SPN with the 3-way propagation on F, where the gating signal is computed using the affinity stream A. 4) SPN+Edge: coupling SPN with edge learning following exactly the same double-gate design. 5) DGP: baseline which contains the proposed DGP layer without the double-gate design. 6) CSEL−: baseline which is the same as CSEL except that the edge loss is removed. 7) CSEL: our full method with single-scale inference. 8) CSEL-MS: CSEL with multi-scale inference at {0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0}. We also consider both instance-sensitive (IS) and non-instance-sensitive (Non-IS) settings, where the edge training/evaluation labels are with/without instance edges. (Footnote 5: for the edge detection metric we follow [37] by applying exactly the same parameters and settings.)

Table 2: Main results of semantic segmentation and semantic edge learning on Cityscapes.

(a) Main semantic segmentation results on the Cityscapes validation set.
Method             | Backbone     | MV | mIoU
PSPNet [26]        | ResNet-101   |    | 78.8
DeepLabV3+ [28]    | ResNet-101   |    | 78.8
CCNet [66]         | ResNet-101   |    | 80.5
GSCNN [21]         | ResNet-101   |    | 74.7
DANet [30]         | ResNet-101   |    | 81.5
RPCNet [42]        | ResNet-101   |    | 82.1
CSEL               | ResNet-101   |    | 83.7
SGPN [57]          | ResNet-38    |    | 80.9
GSCNN† [21]        | ResNet-38    |    | 80.8
VRec-JP (LR) [67]  | ResNet-38    |    | 81.4
Axial-DeepLab [68] | AxialRes-XL  |    | 81.1
CSEL               | ResNet-38    |    | 83.4

(b) Main semantic segmentation results on the Cityscapes test set.
Method             | Backbone     | MV | mIoU
PSPNet [26]        | ResNet-101   |    | 78.4
PSANet [69]        | ResNet-101   |    | 80.1
ASPP [27]          | ResNet-101   |    | 80.1
BFP [58]           | ResNet-101   |    | 81.4
DA-Net [30]        | ResNet-101   |    | 81.5
OCR [29]           | ResNet-101   |    | 81.8
CCNet [66]         | ResNet-101   |    | 81.9
RPCNet [42]        | ResNet-101   |    | 81.8
CSEL               | ResNet-101   |    | 82.1
GSCNN              | ResNet-38    | X  | 82.9
CSEL               | ResNet-38    | X  | 83.5

(c) Main results of semantic edge detection on the Cityscapes validation set, measured by maximum F-score (MF) at optimal dataset scale.
Method       | Backbone   | MF (IS) | MF (Non-IS)
CASENet [12] | ResNet-101 | 68.1    | 68.9
SEAL [37]    | ResNet-101 | 69.1    | -
STEAL [38]   | ResNet-101 | 69.7    | 71.4
RPCNet [42]  | ResNet-101 | -       | 78.2
CSEL         | ResNet-101 | 78.1    | 78.3
CSEL         | ResNet-38  | -       | 78.7

Discussions. We make sure that the comparisons are apples to apples and fair by using the same backbones and training recipes (such as learning rate, crop size, batch size and number of iterations) for all compared methods. The DGP baseline can be considered an apples-to-apples counterpart of SPN, whereas the CSEL− baseline is a further study to understand whether the improvement in CSEL comes purely from the enriched representation with edge features. One could observe several trends from the results: 1) The proposed multi-task network consistently outperforms its single-task counterpart. 2) CSEL outperforms both SPN and SPN+Edge thanks to the dynamic graph propagation and edge gating. 3) The dynamic graph design in DGP alone helps it to achieve 81.3% mIoU, outperforming SPN and SPN+Edge by 1.3% and 0.9%, respectively. 4) The incorporation of edge guidance with the double-gate design further leads to another non-trivial 1.5% improvement. Simply adding the edge feature does not improve segmentation quality: in fact, CSEL− is even slightly lower than DGP, where no edge features are involved.
This reflects that the importance of the edge signal lies in providing structural guidance rather than simply making the representation more expressive.

Comparison to state-of-the-art (semantic segmentation). We also compare CSEL to state-of-the-art semantic segmentation models and list the results in Table 2a and 2b. For clarity, we divide each table into two parts, grouping models that use the same type of backbone together. Among the compared methods, CSEL obtains the best results with both ResNet-101 and ResNet-38 backbones.

Ablation study (edge detection). We additionally conduct experiments on edge detection. We first show an ablation study in Table 3, where ST indicates a single-task edge detection network with the proposed edge stream, and MT indicates naive multi-task training of both segmentation and edge detection. MT leads to a slight performance degradation on the edge detection task compared to single-task training. However, the degradation is marginal compared to the significant improvement in edge quality introduced by the deep coupling of the two tasks in CSEL. Again, this shows the considerable benefit of coupled learning of segmentation and edge detection.

Comparison to state-of-the-art (edge detection). We also compare to state-of-the-art semantic edge detection methods in Table 2c, where we present the training/evaluation protocols in two categories (IS and Non-IS) following SEAL. Note that we evaluate the results with the edge thinning protocol. One could see that our method achieves the best performance in both settings, with significant performance gains from the coupled edge and segmentation learning as well as from the dynamic graph propagation module. Besides the evaluation protocol from SEAL, we also follow a separate evaluation protocol from GSCNN [21] in Table 4, where we use the original evaluation code base from GSCNN to evaluate the boundary quality of semantic segmentation. One can observe that, at all thresholds, CSEL with both ResNet-101 and ResNet-38 backbones outperforms DeepLabV3+ and GSCNN by significant margins in terms of boundary quality.

5.4 Experiments on SBD

We also evaluate CSEL on the SBD dataset and compare with previous state-of-the-art semantic edge detection models. The main results are presented in Table 5. Note that we used the re-annotated SBD test set from [37] to pursue more precise semantic edge evaluation. From the table, one could see that CSEL outperforms other state-of-the-art methods in both the IS and Non-IS settings.

5.5 Experiments on PASCAL VOC12

We evaluate CSEL on the PASCAL VOC12 dataset and report the main results of semantic segmentation in Table 6. The performance is evaluated as the mean IoU over the 20 PASCAL VOC classes plus the background class. We compare CSEL with previous competitive methods that are either state-of-the-art (DeepLabv3+) or closely related (SPN/DFN). CSEL achieves considerable improvement over these methods.

5.6 Experiments on COCO Panoptic

We further conduct experiments on the full COCO Panoptic dataset. Several earlier methods have reported results on COCO-Stuff 10K, an earlier version of COCO with much less data; however, COCO-Stuff 10K does not differ significantly from VOC12 in either the size or the diversity of the data. We therefore adopt the full COCO Panoptic dataset, which is significantly larger and is widely accepted by the detection and instance/panoptic segmentation communities. We hope that our work presents solid baselines that inspire subsequent research on this benchmark. Specifically, we convert the instance segmentation annotations to semantic segmentation ones, and compare segmentation methods with or without the proposed CSEL module.
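The conversion from COCO Panoptic annotations to per-pixel semantic labels can be carried out per image from the official panoptic PNGs and their segment metadata; the following is a hypothetical helper for this step (file paths, the ignore label and the function name are our assumptions; the id encoding follows the official panoptic format).

```python
import numpy as np
from PIL import Image

def panoptic_to_semantic(png_path, segments_info, ignore_label=255):
    """segments_info: list of dicts with 'id' and 'category_id' for one image,
    taken from the panoptic JSON annotation."""
    pan = np.array(Image.open(png_path), dtype=np.uint32)
    # Pixel values encode segment ids as R + 256*G + 256^2*B (panoptic format).
    seg_id = pan[..., 0] + 256 * pan[..., 1] + 256 * 256 * pan[..., 2]
    semantic = np.full(seg_id.shape, ignore_label, dtype=np.uint8)
    for seg in segments_info:
        semantic[seg_id == seg["id"]] = seg["category_id"]
    return semantic
```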
All compared methods adopt the same CASENet (ResNet-101) architecture, and all other hyperparameters are kept exactly the same. Models are trained on the train2017 split and tested on val2017 with single-scale inference. From the results in Table 7, one could see that adding CSEL leads to consistently improved results over the naïve segmentation baselines. In addition, multi-task learning with both segmentation and edge losses also slightly outperforms single-task segmentation. Finally, CSEL outperforms SPN, though with a smaller gain than on Cityscapes. We hypothesize that the reduced gain is partly caused by the noisier mask annotations on COCO Panoptic, which make edge learning more challenging and decrease the quality of the edge gate.

5.7 Robustness Against Natural Corruptions

We hypothesize that CSEL also brings robustness to the segmentation model. We show that this is the case and verify it empirically on Cityscapes-C in Table 8. Specifically, we follow the standard corruption package provided by [70] to corrupt the Cityscapes validation images. This expands the Cityscapes validation set with 16 types of algorithmically generated corruptions from 4 major categories: “Noise”, “Blur”, “Weather” and “Digital”. Each corruption type also contains 5 severity levels, leading to 2500 evaluation images for each type alone (following [71], we only consider severity levels 1-3 for the “Noise” category). Our results show that CSEL overall improves robustness significantly, especially compared to ResNet-38 and GSCNN [21], which share the same backbone. On the other hand, the method does still show some vulnerability to certain types of corruptions, particularly those belonging to the “Noise” category. However, this trend is aligned with ResNet-38 and GSCNN, and based on this pattern we hypothesize that it may be partly related to the specific design of ResNet-38.

6 Conclusion

We proposed a unified framework for coupled segmentation and edge learning. Our work revisits two long-standing and important perceptual grouping problems - semantic segmentation and edge detection. Our method (CSEL) includes a novel end-to-end multi-task network, a recurrent dynamic graph propagation layer, as well as deep coupling of the two tasks on top of them. Finally, results show that careful coupling of the tasks leads to significant improvement on both of them.

Broader Impact

Our method couples two long-standing and important vision problems, semantic segmentation and edge detection, under a unified framework. Besides the various practical benefits from deeply coupling these two tasks, we expect the research to inspire considerable insights, interest and revisiting of perceptual grouping, mid-level representations and structured prediction. The results of this research are likely to find diverse scene understanding applications such as autonomous driving and robot navigation. Like many other discriminative recognition models, our method inevitably faces challenges from input data quality and underlying data biases. The model behavior is subject to various factors such as data distributions, domain gaps, label quality and fairness. We encourage researchers to focus on “in the wild” robustness when the above challenges are present.

Acknowledgement

We thank the NVIDIA GPU Cloud (NGC) team for the computing support of this work.
We also thank the anonymous reviewers and the other NVIDIA colleagues who helped to improve this work with discussions and constructive suggestions.

7 Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes] Please see the result analysis on Cityscapes-C.
(c) Did you discuss any potential negative societal impacts of your work? [Yes] Please see Broader Impact.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [N/A]
(b) Did you include complete proofs of all theoretical results? [N/A]
3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] Going through the legal approval process at this moment. Will release source code upon approval.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Please refer to the implementation details in both the main paper and the supplementary.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [No] We observed stable mIoU results for semantic segmentation models.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Please refer to the supplementary.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes]
(b) Did you mention the license of the assets? [Yes] The data used in our work is open source and can be used for academic research.
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A] The data used in our work is open source and can be used for academic research.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] The data does not contain personally identifiable information or offensive content.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
1. What is the main contribution of the paper in edge detection and segmentation? 2. What are the strengths of the proposed approach, particularly in its performance on various benchmarks? 3. Do you have any concerns or questions regarding the paper's methodology or results? 4. How does the reviewer assess the efficiency and computational cost of the proposed DGP layer? 5. Is there any confusion regarding the training process or the reporting of results in the paper?
Summary Of The Paper Review
Summary Of The Paper This paper proposed a unified framework for detecting edges and segmenting objects. They conduct experiments on Cityscapes, SBD and Pascal VOC. The edge detection results are competitive to previous SOTA methods. Review This paper addresses the problem of joint predicting the segmentation results and the edge. By the proposed DGP layer, they achieve good results among several benchmarks. They also show robustness by experimenting on Cityscapes-C. There are some minor concerns, which need more clarification: Do the PASCAL VOC and SBD train together? Are the method in Table 5 and Table 6 also trained under multi-task settings? I am wondering about the efficiency of the proposed DGP. It will be better if the computational cost and inference speed could be compared and discussed. Will edge detection help the semantic segmentation? It will be better if the single edge detection and single segmentation results could be reported in Table 2. There are some typos, for example, Line 78...
NIPS
Title Coupled Segmentation and Edge Learning via Dynamic Graph Propagation Abstract Image segmentation and edge detection are both central problems in perceptual grouping. It is therefore interesting to study how these two tasks can be coupled to benefit each other. Indeed, segmentation can be easily transformed into contour edges to guide edge learning. However, the converse is nontrivial since general edges may not always form closed contours. In this paper, we propose a principled end-to-end framework for coupled edge and segmentation learning, where edges are leveraged as pairwise similarity cues to guide segmentation. At the core of our framework is a recurrent module termed as dynamic graph propagation (DGP) layer that performs message passing on dynamically constructed graphs. The layer uses learned gating to dynamically select neighbors for message passing using max-pooling. The output from message passing is further gated with an edge signal to refine segmentation. Experiments demonstrate that the proposed framework is able to let both tasks mutually improve each other. On Cityscapes validation, our best model achieves 83.7% mIoU in semantic segmentation and 78.7% maximum F-score in semantic edge detection. Our method also leads to improved zero-shot robustness on Cityscapes with natural corruptions (Cityscapes-C). 1 Introduction Image segmentation and edge detection have been widely studied as important perception problems. The two problems are closely related. In fact, segmentation subsumes edge detection since any segmentation contour makes a closed boundary of a region. The converse is however not true since general edges do not always form closed contours. Nevertheless, edge detection can serve as an auxiliary task to improve segmentation performance since edges provide important pairwise similarity cues for segmentation. Early works tend to focus on the grouping and contrast of pixels from a perceptual similarity perspective. Martin et al. [1] proposed the Berkeley Segmentation Dataset, a popular benchmark for segmentation and boundary detection that inspired many impactful works in perceptual grouping [2–5]. The recent surge of deep learning renders powerful representations with learned features using convolutional neural networks (CNNs) [6]. This has led to great advances in both areas [7–12], but the two tasks are often considered separately. In light of the status quo, we consider coupled edge and segmentation learning. Our goal is twofold: (1) Multi-task learning - being able to produce high quality edge detection and segmentation. (2) Mutual improvement - the two tasks can help each other with non-trivial performance gains. Designing a principled framework is however nontrivial. The key question is how sparse edge signals can be effectively transformed into dense region-level ones to interact with segmentation. To this end, we propose a learnable recurrent message passing layer where semantic edges are considered ∗Equal contribution. Correspondence to Zhiding Yu <[email protected]>. †Work partially done during an internship at NVIDIA. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). as explicitly learned gating signals to refine segmentation. An overview of our framework is shown in Fig. 1. Specifically, the dynamic message passing layer uses affinity gates to select the neighbor for message passing using max-pooling. It conducts message passing sweeps in each of the four directions: left to right, right to left, top to bottom and bottom to top. 
The message passing is jointly gated by both the affinity and edge gates, therefore allowing edge cues to naturally influence long-range dense predictions. As such, our framework presents a context module that is clean, compact yet powerful. Our technical contributions can be summarized as follows: • We formulate recurrent message passing as dynamic graph propagation (DGP). We show that such a formulation simplifies the required normalization in propagation networks [13]. It also dynamically finds graph structures that encodes pixel affinities and improves dense prediction. • We propose a double-gate design where message passing is jointly gated by both the affinity and edge gates to refine the segmentation feature map. We show that this design together with the dynamic 1-way connection in DGP better couples segmentation and edge learning. • We obtain state-of-the-art results on joint semantic segmentation and edge detection. We also show that DGP leads to strong zero-shot robustness to natural corruptions with significant improvement over prior methods on Corrupted Cityscapes (Cityscapes-C). Multitasking segmentation and edge learning is desirable for several reasons: 1) There are many downstream applications where both are needed, such as occlusion reasoning [14], localization [15], proposal generation [16–19] and conditional generation [20]. 2) There are many challenging cases where segmentation quality is poor but edge quality is far more superior, e.g. segmentation tends to be inferior near object boundaries since they are often optimized for IoU rather than precision [21]. In these cases, edge learning can potentially capture details missed by segmentation. 3) Implicitly improved model generalization as a result of the coupled learning [22]. 2 Related Work Semantic Segmentation. There is a rich set of prior work in semantic segmentation. Long et al., [7] proposed fully convolutional end-to-end training of semantic segmentation and pointed out its connection to recognition [6]. Chen et al. [8] introduced atrous convolution and atrous spatial pyramid pooling (ASPP) to capture multi-scale image contexts. Another contemporary work is the wide ResNet-38 which explores a relatively shallower but wider backbone [23]. It has also been shown that context plays an important role in segmentation, including context encoding [24, 25], multi-scale context [26–28] as well as relational context [29, 30]. More recently, there has been a surge of interests in segmentation with Transformers [31–34] Boundary/Edge Detection. Similar to segmentation, boundary/edge detection have been widely studied as perceptual grouping problems in early literature [1–3]. Recent methods tend to resort to CNNs. For example, Bertasius et al. proposed a multi-scale deep network architecture for top-down contour detection, where as Xie et al. [11] further introduced holistically-nested edge detection (HED) for end-to-end edge learning. Besides detecting binary edges, Hariharan et al. [35] proposed the Semantic Boundaries Dataset (SBD) which has become a popular benchmark for semantic edge detection. Compared to binary edge detection, semantic edge detection involves the semantic classification of edge pixels in addition to localization which presents additional challenges to existing frameworks. A series of works including HFL [36], CASENet [12], SEAL [37] and STEAL [38] have followed up [35] and pushed the boundaries. Multi-task Segmentation and Edge Learning. 
Multi-task segmentation and edge learning remains under-studied but is not entirely new. Edges have been pairwise similarity cues to improve segmentation [3, 39, 18] and superpixels [4]. It was shown that edges can be transformed into dense regions through the Laplacian Eigenmaps of boundary transformed affinity [36, 5, 40]. Despite being robust, these methods are generally slow and not end-to-end trainable. For more recent CNN based models, end-to-end multi-task learning on a shared backbone is a natural choice [41]. In addition, proper regularization between segmentation and edge has been shown to improve the performance of both tasks. For example, Takikawa et al. [21] use softmax with temperature to impose consistency between segmentation and semantic edges. Zhen et al. [42] correlate semantic segmentation and edge detection tasks with a consistency loss. Our work goes beyond multi-tasking and let the two task more deeply coupled through dynamic graph propagation. Structured Dense Prediction. Segmentation is a dense prediction task where structured information can be useful. Structured prediction models such as Markov random fields (MRFs) [43], conditional random fields (CRFs) [44–46] and energy minimization [47, 48] have widely proved helpful in segmentation problems by imposing contrast-sensitive smoothness. In addition, structured inference can also be unfolded as network layers for end-to-end training [49, 50], therefore combining the advantages from both ends. Recently, there is an increasing trend to directly model the message passing process itself using multi-dimensional RNN [51–53], graph RNN [54, 55] and spatial propagation network (SPN) [13, 56, 57]. A common advantage is that these methods render more context-aware prediction with larger receptive fields while preserving local details similar to CRFs. Their ability to train end-to-end allows more powerful representation of inter-pixel similarities and thus better dense prediction quality. Our work can be broadly categorized into this category. It is worth mentioning that the contrast sensitive smoothness term in CRF and propagation networks [52, 13] is an implicit modeling of edge signals by learning to relax smoothness constraints at high contrast areas. There have also been variants of SPN [58] that take binary edge as gate regularization. However, none of these works have explicitly addressed multi-tasking learning with category-aware edges whereas the proposed DGP framework presents a novel and effective solution to this problem. 3 Multi-task network As a first step towards coupled segmentation and edge learning, we introduce two novel multi-task networks that are able to perform multi-task learning for both segmentation and edge detection. An overview of their architectures is shown in Fig. 2. The rest of the paper follows these notations and covers more details for each module. Backbone. A CNN giving a semantic feature map (denoted as F ) encoding the segmentation information. There could be multiple choices of the backbone networks, ranging from the popular architectures of DeepLabv2 [8], DeepLabv3 [27] to latest state-of-the-arts such as DeepLabv3+ [28] and Dual-Attention Network [30]. Adopting powerful backbones will surely benefit the system-level performance, but the major purpose of this work does not completely lie in achieving state-of-the-arts. We are more interested in showing the effectiveness of a proposed framework on standard backbones. 
To this end, we consider both CASENet [12] and ResNet-38 [23] as the standard backbones for benchmarking. Both of them comprehensively covers the feature map resolutions of 1, 1/2, 1/4 and 1/8, with the last resolution being a rule of thumb adopted by many segmentation networks. For both networks, we adopt atrous spatial pyramid pooling layers with dilation rate [6, 12, 18, 24, 30, 36] to capture context information. We set the output channels to be 128 to produce a semantic feature map, fowllowed by either direct segmentation prediction (baselines) or the proposed DGP layer. Affinity stream. Convolution layers that aggregate the cross-layer information of the backbone and produces an affinity feature map (denoted as A) to encode the affinity of pairwise neighboring pixels. This stream is only used with the presence of recurrent message passing layer (including its related baselines). The stream aims to model the simialrity of pair-wise neighboring pixels and serve as a major input to the gating signal in recurrent message passing layer. Compared to edge stream, the affinity stream seeks to capture coarser-level inter-pixel affinity with slightly deeper convolutions and higher dimensions. The resulting feature map A is a 256 dimensional tensor concatenated by ASPP and the side convolutional feature maps from the affinity stream. Edge stream. Dense skip-layer features with abundant details that are combined with edge classification layer through shared concatenation [12] to produce a semantic edge map (denoted as E). We design an improved edge stream over CASENet [12] to better leverage detail information at multiple scales/levels with dense side connections. Unlike CASENet where the bottom blocks only provide 1-channel side features, we densely obtain side features from every sub-block residual modules. In CASENet, this gives side features with a total number of 31 dimensions. Similar rules apply to ResNet-38, where the side feature has a total number of 17 dimensions We also found that applying sigmoid after side features with resolution 1/8 greatly benefits edge training from two aspects: (1) It helps to stabilize and prevent gradient explosions from dense skip connections. (2) It removes the typical aliasing edge issue caused by upsampling/deconvolution, and produces elegant smooth edges. Even though we do not explicitly apply techniques such as edge alignment [37] in this work, the proposed backbone is able to produce high quality edge predictions under noisy labels. We also notice that it is better to remove sigmoid for side features with higher resolutions. Edge prediction is multi-tasked alongside with ASPP using 1× 1 convolution on top of Res Block5. This returns the K classes coarse edge predictions. Similar to CASENet, we upsample all side features together with the K-dimensional edge predictions to full image resolution and apply shared concatenation, where side features are repetitively shared and concatenated with each class for K times, followed by a K-way 3× 3 group convolution to produce semantic edges. Dynamic graph propagation. Learnable recurrent message passing layer that takes the above three branches as input to produce a refined semantic feature map (denoted as F ∗). More details regarding this module will be introduced in the next section. 4 Coupled segmentation and edge learning Although the multi-task network is able to jointly produce segmentation and edge prediction, we are interested in letting these two tasks better coupled to mutually improve each other. 
1. What is the focus and contribution of the paper on coupling edge prediction and semantic segmentation? 2. What are the strengths of the proposed method, particularly in its extension of spatial propagation network SPN with a dynamic graph propagation scheme? 3. What are the weaknesses of the paper regarding its explanations, figures, and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Do you have any questions or concerns regarding the proposed method, its applications, or its contributions to the field?
Summary Of The Paper Review
Summary Of The Paper The paper proposes coupling edge prediction and semantic segmentation using a multi-task network dubbed CSEL. Some of their goals include the mutual improvement of performance on the two tasks when learned together. The proposed model (CSEL) extends the spatial propagation network SPN with a dynamic graph propagation (DGP) scheme. DGP is a gated recurrent graph message passing that is responsible for selecting neighboring pixels with the highest response. In DGP, the semantic edges are used as gating signals to refine semantic segmentation. Review The proposed method is nontrivial and provides some additional gains over SOTA. The paper also contains an insightful ablation study, which is positive. In many experiments, the performance gains are not very impressive, which makes me feel lukewarm about this approach. A major issue to me seems to be the lack of clarity in the explanations and figures. I feel it is hard for a reader to understand the concepts/architecture and replicate the work for future research. I would not recommend this paper for publication in its current state; however, I will keep an open mind for post-rebuttal remarks. Some more detailed comments are presented below: I would appreciate hearing from the authors on why the edges serve segmentation only as gating signals. Could it not be helpful to enrich the segmentation features as well? I believe this is one point where clarification could be improved. The message passing figures seem a bit confusing to me and the legends are not informative. In Figure 1, again, the legend could be improved. There are elements in the figure that are not explained in the legend, which forces the reader to look for them in the text. The same figure does not show how the message passing happens in DGP or which features are involved, which makes it harder to understand the overall framework. Clearer comparison with SPN: since the model builds on SPN, I was expecting a closer analysis of the differences between them, plus the exact differences in terms of technical components. I was expecting some qualitative results of CSEL in the main paper (there are some in the supplementary, which is ok). However, I could not find any qualitative comparison of CSEL and SPN, and I was curious about how the performance gains of CSEL translate visually against SPN+edge, for example.
NIPS
Title GradAug: A New Regularization Method for Deep Neural Networks Abstract We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is utilizing randomly transformed training samples to regularize a set of sub-networks, which are generated by sampling the width of the original network, in the training process. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and is therefore termed Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, which is a new state-of-the-art accuracy. By combining with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. The generalization ability is evaluated on COCO object detection and instance segmentation, where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low data regimes. Code is available at https://github.com/taoyang1122/GradAug 1 Introduction Deep neural networks have achieved great success in computer vision tasks such as image classification [1, 2], image reconstruction [3, 4], object detection [5, 6] and semantic segmentation [7, 8]. But deep neural networks are often over-parameterized and easily suffer from over-fitting. Regularization [9, 10] and data augmentation [1, 11] are widely used techniques to alleviate the over-fitting problem. Many data-level regularization methods [10, 12, 13] have achieved promising performance in image classification. These methods are similar to data augmentation in that they put constraints on the input images. Although effective in image classification, these methods are hard to apply to downstream tasks such as object detection and segmentation due to their special operations. For example, the state-of-the-art CutMix [13] cannot be directly applied to object detection because, first, mixing samples will destroy the semantics in images and, second, it is hard to interpolate the labels in these tasks.
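To make this concrete, the following is a minimal sketch of CutMix-style mixing for classification, written under our own assumptions rather than taken from [13]: a random box is pasted from a shuffled copy of the batch and the two labels are mixed in proportion to the pasted area. Such a mixed-label loss only makes sense when every image carries a single class label, which is why this style of augmentation does not transfer directly to detection or segmentation targets.

import numpy as np
import torch
import torch.nn.functional as F

def cutmix_batch(images, labels, alpha=1.0):
    # Paste a random box from a shuffled batch; labels are mixed by area.
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    H, W = images.shape[2], images.shape[3]
    rw, rh = int(W * np.sqrt(1.0 - lam)), int(H * np.sqrt(1.0 - lam))
    cx, cy = np.random.randint(W), np.random.randint(H)
    x1, x2 = np.clip(cx - rw // 2, 0, W), np.clip(cx + rw // 2, 0, W)
    y1, y2 = np.clip(cy - rh // 2, 0, H), np.clip(cy + rh // 2, 0, H)
    images[:, :, y1:y2, x1:x2] = images[perm][:, :, y1:y2, x1:x2]
    lam = 1.0 - (x2 - x1) * (y2 - y1) / float(W * H)  # correct lam for box clipping
    return images, labels, labels[perm], lam

def cutmix_loss(logits, y_a, y_b, lam):
    # Interpolated classification loss; has no clear analogue for box/mask labels.
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)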
Another category of regularization methods imposes constraints on the network structures. [14] proposes that adding noise to the network gradients can improve generalization. Other methods [9, 15, 16] randomly drop some connections in the network, which implicitly introduces random noise into the training process. These methods are usually more generic but not as effective as data-level regularization. In this paper, we introduce Gradient Augmentation (GradAug), which generates meaningful disturbances to the gradients by the network itself rather than just adding random noise. The idea is that when a random transformation (e.g., random rotation, random scale, random crop, etc.) is applied to an image, a well-generalized network should still recognize the transformed image as the same object. Different from the regular data augmentation technique, which only regularizes the full-network, we regularize the representations learned by a set of sub-networks, which are randomly sampled from the full network in terms of the network width (i.e., the number of channels in each layer). Since the representation of the full network is composed of the sub-networks’ representations due to weight sharing during training, we expect sub-networks to learn different representations from different transformations, which will lead to a well-generalized and diversified full network representation. We conduct a comprehensive set of experiments to evaluate the proposed regularization method. Using a simple random scale transformation, GradAug can improve the ImageNet Top-1 accuracy of ResNet-50 from 76.32% to 78.79%, which is a new state-of-the-art accuracy. By leveraging a more powerful data augmentation technique, CutMix [13], we can further push the accuracy to 79.67%. The representation’s generalization ability is evaluated on COCO object detection and instance segmentation tasks (Section 4.4). Our ImageNet pretrained model alone can improve the baseline MaskRCNN-R50 by +1.2 box AP and +1.2 mask AP. When applying GradAug to the detection framework, it can outperform the baseline by +1.7 box AP and +2.1 mask AP. Moreover, we demonstrate that GradAug is robust to image corruptions and adversarial attacks (Section 4.5) and is highly effective in low data settings (Section 4.6). 2 Related Work Data augmentation. Data augmentation [1, 11, 17] increases the amount and diversity of training data by linear or non-linear transformations over the original data. In computer vision, it usually includes rotation, flipping, etc. Recently, a series of regularization methods use specially-designed operations on the input images to alleviate over-fitting in deep neural networks. These methods are similar to data augmentation. Cutout [10] randomly masks out a square region of the image to force the network to look at other image context. Dropblock [18] shares a similar idea with Cutout but drops a region in the feature maps. Although they have achieved improvements over regular data augmentation, such region dropout operations may lose information about the original images. Mixup [12] mixes two samples by linearly interpolating both the images and labels. CutMix [13] combines Cutout and Mixup to replace a square region with a patch from another image. Other mixed-sample variants [19, 20] all share similar ideas.
While effective in image classification, mixed-sample augmentation cannot naturally be applied to tasks such as detection and segmentation due to semantic and label ambiguities. In contrast, the proposed GradAug is a task-agnostic approach which leverages the most common image transformations to regularize sub-networks. This allows the method to be directly applied to different vision tasks and makes it easily amenable to other applications. Structure regularization. Another category of regularization methods imposes constraints on the network weights and structure to reduce over-fitting. [14] points out that adding random noise to the gradients during training can help the network generalize better. Dropout [9] randomly drops some connections during training to prevent units from co-adapting. The random dropping operation also implicitly introduces random noise into the training process. Many follow-up works share the idea of Dropout by randomly dropping network layers or branches. Shake-Shake [21] assigns random weights to residual branches to disturb the forward and backward passes, but it is limited to three-branch architectures. ShakeDrop [22] extends Shake-Shake to two-branch architectures (e.g., ResNet [2] and PyramidNet [23]). However, its application is still limited. [15] randomly drops a subset of layers during training; the final network can be viewed as an ensemble of many shallow networks. Although these methods have shown improvements on image classification, they are usually not as effective as data-level regularization strategies. Moreover, their generalization and effectiveness are not validated on other tasks. GradAug leverages the advantages of both categories of methods. It uses different augmentations to regularize a set of sub-networks generated from the full network in the joint training process. This introduces self-guided disturbances to the gradients of the full network rather than adding random noise. The method is more effective and generic than previous techniques. 3 GradAug 3.1 Algorithm When applying some random transformations to an image, humans can still recognize it as the same object. We expect deep neural networks to have the same generalization ability. GradAug aims to regularize sub-networks with differently transformed training samples. There are various methods to generate sub-networks during training. Previous works [9, 15, 16] usually stochastically drop some neurons, layers or paths. In GradAug, we expect the final full-network to take advantage of the learned representations of the sub-networks. Therefore, we sample sub-networks in a more structured manner, that is, by the network width. We define θ as the model parameter. Without loss of generality, we use convolutional layers for illustration; then θ ∈ R^{c1×c2×k×k}, where c1 and c2 are the numbers of input and output channels and k is the convolution kernel size. We define the width of a sub-network as w ∈ [α, 1.0], where α is the width lower bound. The weights of the sub-network are θ_w. Different from random sampling, we always sample the first w × 100% of the channels of the full-network, and the sub-network weights are θ_w ∈ R^{wc1×wc2×k×k}. In this way, a larger sub-network always shares the representations of a smaller sub-network in a weight-sharing training fashion, so it can leverage the representations learned in smaller sub-networks. Iteratively, sub-networks can construct a full-network with diversified representations.
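As a rough illustration of this width-based sampling (a sketch under our own assumptions, not the authors' released implementation; the class name SlimmableConv2d is hypothetical), a convolution layer can expose a width argument and slice the first fraction of its shared filters:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Conv2d):
    # Convolution that can run at a reduced width w by using only the
    # first fraction of its (shared) filters; assumes groups=1.
    def forward(self, x, width=1.0):
        out_ch = max(1, int(round(self.out_channels * width)))
        in_ch = x.shape[1]  # already narrowed by the preceding layer
        weight = self.weight[:out_ch, :in_ch]
        bias = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding, self.dilation)

# The same layer evaluated as the full network and as a 0.9-width sub-network.
conv = SlimmableConv2d(64, 128, kernel_size=3, padding=1)
x = torch.randn(2, 64, 32, 32)
full = conv(x, width=1.0)   # all 128 output channels
sub = conv(x, width=0.9)    # first 115 output channels, same shared weights

In a full network, every layer would be sliced consistently and the width argument threaded through the forward pass; batch normalization needs extra care, as discussed in the training procedure below.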
Figure 1 shows the class activation maps (CAM) [24] of the sub-network and full-network. The full-network pays attention to several regions of the object because it can leverage the representation of the sub-network. For example, when the sub-network (w = 0.9) focuses on one dog in the image, the full-network shares this attention and uses another part of the network to capture the information of the other dog. Therefore, the full-network learns richer semantic information in the image, while the baseline model only models a single region and does not fully comprehend the salient information of the image. To make the method simple and generic, we choose among the most commonly used transformations such as random scales, random rotations, random crops, etc. In the experiments, we show that a simple random scale transformation can already achieve state-of-the-art performance on image classification, and it can be directly applied to other applications. Moreover, we can use more powerful augmentations such as CutMix for further enhanced performance. Training procedure. The training procedure of GradAug is very similar to regular network training. In each training iteration, we train the full-network with the original images, which is the same as the regular training process. Then we additionally sample n sub-networks and train them with randomly transformed images. Finally, we accumulate the losses of the full-network and the sub-networks to update the model weights. This naive training approach achieves good training accuracy, but the testing accuracy is very low. This is caused by the batch normalization (BN) [25] layers. A BN layer collects a moving average of the training batches’ means and variances during training; the collected mean and variance are used during inference. However, the batch mean and variance in the sub-networks can be very different from those in the full-network because the training samples are randomly transformed. This will cause the final BN mean and variance to be inappropriate for the full-network during inference. But in the training phase, BN uses the mean and variance of the current batch, so the training behaves normally. To obtain the correct BN statistics for the full-network, we do not update the BN mean and variance when training the sub-networks. Only the full-network is allowed to collect these statistics. However, the weights in the BN layers are still updated by the sub-networks because they are shared with the full-network. To further improve the performance, we also leverage two training tricks from [26]. First, we use the output of the full-network as soft labels to train the sub-networks. Second, we always sample the smallest sub-network (i.e., w = α) during training if n > 1. The effect of these two training tricks is provided in the supplementary material. The PyTorch-style pseudo-code of GradAug is presented in Algorithm 1. 3.2 Analysis of gradient property We provide an in-depth analysis of GradAug from the perspective of gradient flow. For simplicity, we consider a fully connected network with 1-D training samples. We define the network as N. The parameter of one layer in the full-network is θ ∈ R^{c1×c2}. The parameters of the sub-networks are θ_w, as explained in Section 3.1. x ∈ R^d is the training sample and y is its label. The output of the network is denoted as N(θ, x), and the training loss is l(N(θ, x), y), where l is the loss function, which is often the cross entropy in image classification.
Algorithm 1 Gradient Augmentation (GradAug)
Input: network Net, training image img, random transformation T, number of sub-networks n, sub-network width lower bound α.
  # Train the full-network.
  Forward pass: output_f = Net(img)
  Compute loss: loss_f = criterion(output_f, target)
  # Regularize the sub-networks.
  for i in range(n):
      Sample a sub-network: subnet_i = Sample(Net, α)
      Fix the BN layers' mean and variance: subnet_i.track_running_stats = False
      Forward pass with transformed images: output_i = subnet_i(T^i(img))
      Compute loss with soft labels: loss_i = criterion(output_i, output_f)
  end for
  Compute the total loss: L = loss_f + Σ_{i=1}^{n} loss_i
  Compute gradients and do the backward pass

The loss and gradients in a standard training process are computed as

  L_std = l(N(θ, x), y),    g_std = ∂L_std / ∂θ,    (1)

where g_std ∈ R^{c1×c2}. Structure regularization methods [9, 15, 16] randomly drop some connections in the network, and their loss and gradients can be computed as

  L_sr = l(N(θ_rand, x), y),    g_sr = ∂L_sr / ∂θ_rand.    (2)

We can view g_sr as having the same shape as g_std, where the gradients of the disabled connections are 0. Therefore, we can rewrite g_sr as

  g_sr = g_std + g_noise,    (3)

where g_noise ∈ R^{c1×c2} is a random matrix which introduces random disturbances to the gradients. In contrast, GradAug applies more meaningful disturbances to the gradients. Let T be the random transformation operation (e.g., random scale, random rotation, etc.) and T^i be the transformation applied to sub-network i (i = 1, ..., n). The loss and gradients are computed as

  L_GA = l(N(θ, x), y) + Σ_{i=1}^{n} l(N(θ_{w_i}, T^i(x)), N(θ, x)),
  g_GA = ∂l(N(θ, x), y) / ∂θ + Σ_{i=1}^{n} ∂l(N(θ_{w_i}, T^i(x)), N(θ, x)) / ∂θ_{w_i} = g_std + g'.    (4)

g_GA has a similar form to g_sr. The first term is the same as the gradient in standard training, but the second term g' is derived from the sub-networks with transformed training samples. Since the sub-networks are part of the full-network, we call this term “self-guided”. It reinforces good descent directions, leading to improved performance and faster convergence. g' can be viewed as an augmentation of the raw gradient g_std. It allows different parts of the network to learn diverse representations. The gradients of data-level regularization methods are similar to g_std, with the difference lying only in the training sample. The gradients are

  g_dr = ∂l(N(θ, f(x)), y) / ∂θ,    (5)

where f is the augmentation method, such as CutMix. GradAug can also leverage these augmentations by applying them to the original samples before the random transformations. The gradients become

  g_GA = ∂l(N(θ, f(x)), y) / ∂θ + Σ_{i=1}^{n} ∂l(N(θ_{w_i}, T^i(f(x))), N(θ, f(x))) / ∂θ_{w_i} = g_dr + g'.    (6)

g' is still an augmentation of g_dr. Data augmentation can also be combined with other structure regularization methods. However, similar to the derivations in Eq. 2 and Eq. 3, such a combination strategy introduces random noise into g_dr, which is not as effective as GradAug, as shown in Table 3. 4 Experiments We first evaluate the effectiveness of GradAug on image classification. Next, we show the generalization ability of GradAug on object detection and instance segmentation. Finally, we demonstrate that GradAug can improve the model’s robustness to image distortions and adversarial attacks. We also show GradAug is effective in low data settings and can be extended to semi-supervised learning. 4.1 ImageNet classification Implementation details. The ImageNet [27] dataset contains 1.2 million training images and 50,000 validation images in 1000 categories.
We follow the same data augmentations as in [13] for a fair comparison. On ResNet-50, we train the model for 120 epochs with a batch size of 512. The initial learning rate is 0.2 with a cosine decay schedule. We sample n = 3 sub-networks in each training iteration and the width lower bound is α = 0.9. For simplicity, we only use the random scale transformation for sub-networks; that is, the input images are randomly resized to one of {224×224, 192×192, 160×160, 128×128}. Note that we report the final-epoch accuracy rather than the highest accuracy in the whole training process, as is reported in CutMix [13]. We evaluate GradAug and several popular regularization methods on the widely used ResNet-50 [2]. The results are shown in Table 1. GradAug achieves a new state-of-the-art performance of 78.79% based on ResNet-50. Specifically, GradAug significantly outperforms the structure regularization methods by more than 1 point. As illustrated in Eq. 3 and Eq. 4, GradAug has a similar form to structure regularization. The difference is that GradAug introduces self-guided disturbances to augment the raw gradients. The large improvement over the structure regularization methods clearly validates the effectiveness of our proposed method. As shown in Eq. 6, GradAug can be seamlessly combined with data augmentation. We combine GradAug with CutMix (p=0.5) and denote this method as GradAug†. We compare GradAug† with bag of tricks [28] at the bottom of Table 1. It is evident that GradAug† outperforms bag of tricks in both model complexity and accuracy. Note that bag of tricks includes a host of advanced techniques such as model tweaks, training refinements, label smoothing, knowledge distillation, Mixup augmentation, etc., while GradAug is as easy as regular model training. Due to the sub-networks in GradAug training, one natural question arises: would the training cost of GradAug increase significantly? As stated in [13], typical regularization methods [12, 13, 18] require more training epochs to converge, while GradAug converges with fewer epochs. Thus the total training time is comparable. The memory cost is also comparable because the sub-networks do forward and back-propagation one by one, and only their gradients are accumulated to update the weights. Table 2 shows the comparison on ImageNet. The training cost is measured on an 8× 1080Ti GPU server with a batch size of 512. We can see that the training time of GradAug is comparable with state-of-the-art regularization methods such as CutMix. 4.2 Cifar classification Implementation details. We also evaluate GradAug on the Cifar-100 dataset [29]. The dataset has 50,000 images for training and 10,000 images for testing in 100 categories. We choose the WideResNet [30] and PyramidNet [23] structures as they achieve state-of-the-art performance on the Cifar dataset. We follow the training settings in [23, 30] in our experiments. For WideResNet, we train the model for 200 epochs with a batch size of 128. The initial learning rate is 0.1 with a cosine decay schedule. Weight decay is 0.0005. PyramidNet is trained for 300 epochs with a batch size of 64. The initial learning rate is 0.25 and decays by a factor of 0.1 at 150 and 225 epochs. Weight decay is 0.0001. We use the random scale transformation, where input images are resized to one of {32×32, 28×28, 24×24}. The number of sub-networks is n = 3 and the width lower bound is α = 0.8. The results are compared in Table 3.
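Putting the pieces together, one GradAug training iteration could be sketched in PyTorch roughly as follows. This is an illustrative sketch under our own assumptions rather than the released code: it assumes the model's forward pass accepts a width argument (e.g., built from width-sliced layers as sketched in Section 3.1), uses a KL divergence against the detached full-network output as the soft-label loss, and back-propagates the accumulated loss once, whereas the paper back-propagates each sub-network loss one by one to keep the memory cost comparable.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def set_bn_tracking(model, enabled):
    # Toggle the update of BN running statistics; when disabled, BN still
    # normalizes with batch statistics but leaves the running buffers untouched.
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.track_running_stats = enabled

def gradaug_step(model, images, targets, optimizer,
                 n=3, alpha=0.9, scales=(224, 192, 160, 128)):
    # One GradAug iteration: full-network on the original images plus n
    # sub-networks on randomly rescaled images, supervised by soft labels.
    optimizer.zero_grad()

    # Full-network pass (collects BN statistics, standard gradient term).
    set_bn_tracking(model, True)
    full_out = model(images, width=1.0)
    loss = F.cross_entropy(full_out, targets)
    soft_target = F.softmax(full_out.detach(), dim=1)

    # Sub-network passes (self-guided term); BN statistics are frozen.
    set_bn_tracking(model, False)
    for i in range(n):
        w = alpha if i == 0 else random.uniform(alpha, 1.0)  # always include the smallest width
        s = random.choice(scales)                            # random scale transformation
        x_t = F.interpolate(images, size=(s, s), mode='bilinear', align_corners=False)
        sub_out = model(x_t, width=w)
        loss = loss + F.kl_div(F.log_softmax(sub_out, dim=1), soft_target,
                               reduction='batchmean')
    set_bn_tracking(model, True)

    loss.backward()
    optimizer.step()
    return loss.item()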
GradAug is comparable with the state-of-the-art CutMix, and it clearly outperforms the best structure regularization method, ShakeDrop, which validates the effectiveness of the self-guided augmentation of the raw gradients. We further illustrate this by comparing GradAug† with CutMix + ShakeDrop. On WideResNet, ShakeDrop severely degrades the Top-1 accuracy of CutMix by 2.44%, while GradAug consistently improves CutMix by more than 1 point. The reason is that ShakeDrop introduces random noise into the training process, which is unstable and ineffective in some cases. In contrast, GradAug is a self-guided augmentation of the gradients, which makes it compatible with various structures and data augmentations. 4.3 Ablation study We study the contribution of random width sampling and random transformation to the performance, respectively. We also show the impact of the number of sub-networks n and the width lower bound α. The experiments are conducted on Cifar-100 based on the WideResNet-28-10 backbone. Random width sampling and random transformation. We study the effect of each component by removing the other one. First, we do not randomly sample sub-networks; then GradAug becomes multi-scale training in our experiments, and in each iteration we feed differently scaled images to the network. Second, we do not apply the random scale transformation; in each iteration, we sample 3 sub-networks and feed them the original images. The results are shown in Table 4. Random scale and random width sampling alone only achieve marginal improvements over the baseline, but GradAug remarkably enhances the baseline (+2.43%). This reaffirms the effectiveness of our method, which unifies data augmentation and structure regularization in the same framework for better performance. Number of sub-networks and width lower bound. There are two hyperparameters in GradAug: the number of sub-networks n and the sub-network width lower bound α. We first explore the effect of n. Other settings are the same as in Section 4.2. The results are shown in Figure 2. A larger n tends to achieve higher performance since it involves more self-guided gradient augmentations. The accuracy plateaus when n ≥ 3. Note that even one sub-network can significantly improve the baseline. Then we investigate the impact of the width lower bound α while fixing the other settings. As shown in Figure 2, α = 0.8 achieves the best accuracy, but all the values clearly outperform the baseline. GradAug is not sensitive to these hyperparameters. Empirically, we can set n ≥ 3 and α ∈ [0.7, 0.9]. Effect of different transformations. As shown in the experiments above, GradAug is very effective when leveraging the random scale transformation and CutMix. Here we further explore other transformations, including the random rotation transformation and the combination of random scale and rotation transformations. We conduct the experiments on WideResNet-28-10 and ResNet-50 following the settings above. For random rotation, we randomly rotate the images by one of {0°, 90°, 180°, 270°}. For the combination, the input images are first randomly rotated and then randomly resized. The results are shown in Table 5. It is clear that both transformations (random scale and random rotation) and their combination achieve significant improvements over the baseline. This validates our idea of regularizing sub-networks with differently transformed images. Generating sub-networks by stochastic depth. In the experiments above, we generate sub-networks by cutting the network width.
Similarly, we can generate sub-networks by shrinking the network depth. We follow StochDepth [15] to randomly drop some layers during training. The training settings are the same as in [15], and we use the random scale transformation to regularize the sub-networks. As shown in Table 6, GradAug significantly outperforms the baseline and StochDepth. This demonstrates that GradAug can be generalized to depth-shortened sub-networks and again verifies the effectiveness of our idea. 4.4 Object detection and instance segmentation To evaluate the generalization ability of the representations learned by GradAug, we finetune its ImageNet pretrained model for COCO [31] object detection and instance segmentation. The experiments are based on the Mask-RCNN-FPN [6, 32] framework and the MMDetection toolbox [33] with a ResNet-50 backbone. Mixup and CutMix, the two most effective methods in image classification, are employed for comparison. As explained in Section 2, Mixup and CutMix are mixed-sample data augmentation methods, which cannot be applied to object detection and segmentation. Therefore, we compare these methods by directly finetuning their ImageNet pretrained models on the COCO dataset. All models are trained with the 1× schedule on COCO. The image resolution is 1000 × 600. The mean Average Precision (AP at IoU=0.50:0.05:0.95) is reported in Table 7. We can see that although Mixup and CutMix achieve large improvements on ImageNet classification, the learned representations can barely benefit object detection and segmentation. In contrast, the GradAug-pretrained model considerably improves the performance of Mask-RCNN. This validates that GradAug enables the model to learn well-generalized representations which transfer well to other tasks. Moreover, the training procedure of GradAug can be directly applied to the detection framework. The result (last line of Table 7) shows that it further boosts the performance compared with GradAug-pretrained and can significantly improve the baseline by +1.7 det mAP and +2.1 seg mAP. The implementation details and qualitative results are in the supplementary material. 4.5 Model robustness Deep neural networks are easily fooled by imperceptible changes to input images. Developing robust machine learning models is pivotal for safety-critical applications. In this section, we evaluate the model robustness to two kinds of perturbations: image corruptions and adversarial attacks. Image corruption. The ImageNet-C dataset [34] is created by introducing a set of 75 common visual corruptions to ImageNet classification. ImageNet-C has 15 types of corruptions drawn from four categories (noise, blur, weather and digital). Each type of corruption has 5 levels of severity. Corruptions are applied to the validation set only. Models trained on clean ImageNet should be tested on the corrupted validation set without retraining. We follow the evaluation metrics in [34] to test ResNet-50 trained by different regularization methods. The mean corruption error (mCE) is reported in Table 8. Mixup has a lower mCE than the other methods. We conjecture the reason is that Mixup proportionally combines two samples, in a manner similar to how the corrupted images are generated. GradAug outperforms the second-best competing method, CutMix, by 1.4%. Note that GradAug can also be combined with Mixup; we denote this as GradAug*. The results in Table 8 reveal that GradAug* further improves Mixup and achieves the lowest mCE. This demonstrates that GradAug is capable of leveraging the advantages of different augmentations.
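For reference, corrupted evaluation images of this kind can be generated with the publicly available imagecorruptions package, which re-implements the ImageNet-C corruption functions. The snippet below is only an illustration of the general recipe (file names are placeholders), not the evaluation pipeline used for Table 8, and the exact corruption list and severity handling may differ from the benchmark protocol.

import numpy as np
from PIL import Image
from imagecorruptions import corrupt, get_corruption_names

def corrupt_image(path, corruption_name, severity):
    # Apply one ImageNet-C-style corruption to a single image file.
    img = np.asarray(Image.open(path).convert('RGB'))  # HxWx3 uint8
    out = corrupt(img, corruption_name=corruption_name, severity=severity)
    return Image.fromarray(np.uint8(out))

# Enumerate corruption types and severities to build a corrupted validation set.
for name in get_corruption_names():     # e.g. 'gaussian_noise', 'motion_blur', ...
    for severity in range(1, 6):        # severity levels 1-5
        corrupted = corrupt_image('val_0001.png', name, severity)
        corrupted.save(f'val_0001_{name}_{severity}.png')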
Adversarial attack. We also evaluate model robustness to adversarial samples. Different from image corruptions, an adversarial attack uses a small distortion that is carefully crafted to confuse a classifier. We use the Fast Gradient Sign Method (FGSM) [35] to generate adversarial distortions and conduct white-box attacks on ResNet-50 models trained by different methods. The classification accuracy on the adversarially attacked ImageNet validation set is reported in Table 9. Note that Mixup is not as robust here as it is to image corruptions, which supports our conjecture in the image corruption experiment. GradAug and CutMix are comparable and both significantly outperform the other methods. GradAug† further gains improvements over GradAug and CutMix, demonstrating the superiority of our self-guided gradient augmentation. 4.6 Low data setting Deep neural network models suffer from more severe over-fitting when there is only a limited amount of training data. Thus we expect regularization methods to show their superiority in the low data setting. However, we find that state-of-the-art methods are not as effective as expected. For a fair comparison, we follow the same hyperparameter settings as in [37]. The backbone network is WideResNet-28-2. We first evaluate different methods on Cifar-10 with 250, 1000 and 4000 labels. Training images are sampled uniformly from the 10 categories. We run each model on 5 random data splits and report the mean and standard deviation in Table 10. We observe that CutMix (p=0.5) and ShakeDrop even degrade the baseline model performance, especially when labels are very limited. CutMix mixes images and their labels, which introduces strong noise into the data and ground-truth labels. This is effective when there are enough clean labels to learn a good baseline, but when the baseline is weak, this disturbance is too severe. We reduce the impact of CutMix by setting p=0.1, where CutMix is barely used during training. CutMix still harms the baseline when there are only 250 labels, but it becomes beneficial when there are 4000 labels. ShakeDrop has a similar trend to CutMix since it introduces noise into the structure. In contrast, GradAug significantly and consistently enhances the baseline in all cases because it generates self-guided augmentations of the baseline rather than noise. Moreover, GradAug can be easily extended to semi-supervised learning (SSL). We can leverage the full-network to generate labels for unlabeled data and use them to train the sub-networks. See the supplementary material for implementation details. Our GradAug-semi can further improve the performance over GradAug. It even achieves performance comparable to Mean Teacher [36], which is a popular SSL algorithm. We also evaluate the methods on the STL-10 dataset [38]. The dataset is designed to test SSL algorithms, where the unlabeled data are sampled from a different distribution than the labeled data. Similarly, CutMix and ShakeDrop are not effective, while GradAug and GradAug-semi achieve clear improvements. 5 Conclusion In this paper, we propose GradAug, which introduces self-guided augmentations to the network gradients during training. The method is easy to implement while being effective. It achieves a new state-of-the-art accuracy on ImageNet classification. The generalization ability is verified on COCO object detection and instance segmentation. GradAug is also robust to image corruptions and adversarial attacks.
We further reveal that current state-of-the-art methods do not perform well in the low data setting, while GradAug consistently enhances the baseline in all cases. Acknowledgments This work is partially supported by the National Science Foundation (NSF) under Grant No. 1910844 and the NSF/Intel Partnership on MLWiNS under Grant No. 2003198. Broader Impact The proposed regularization method is a generic approach for deep neural network training. Researchers in the machine learning and computer vision communities should benefit from this work. To the best of our knowledge, this research will not put anyone at a disadvantage. All the experiments are based on public datasets and follow the standard experimental settings. Thus the method does not leverage biases in the data.
1. What is the focus and contribution of the paper regarding deep neural networks? 2. What are the strengths of the proposed approach, particularly in its regularization technique? 3. What are the weaknesses of the method, especially regarding its efficiency and comparisons with other solutions? 4. How does the reviewer assess the effectiveness of the proposed method in various tasks and datasets? 5. Are there any concerns or suggestions regarding the sub-network sampling strategy or data augmentation choices?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes a new regularization method called GradAug to better train deep neural networks. The main contributions are: 1. A multi-forward method of different data augmentations using different sub-networks is proposed. The method is viewed as a gradient augmentation technique by the authors. 2. Extensive experiments on image classification, object detection, instance segmentation, adversarial attacks and the low data setting are conducted to demonstrate the effectiveness of the proposed method. Strengths 1. The authors propose to regularize neural nets by forwarding and backwarding a combination of multiple data augmentations. Different data augmentations go into different sub-networks, and the final sub-network sampling simply keeps the first w (w ∈ [0, 1]) fraction of the total filters and output channels of each layer. The whole procedure is introduced as a special gradient augmentation. 2. The authors show the improvements of the proposed method on many different tasks and datasets. Beyond the accuracy of recognition, detection and segmentation, the model robustness to adversarial samples is also improved. Weaknesses 1. The training time and memory cost could increase by several times. 2. Since the training is more time consuming, comparisons with other solutions, such as mimicking/distillation-based approaches, are suggested. 3. Only a very simple sub-network sampling strategy is considered. What about randomly choosing a sub-network each time, or keeping most of the network fixed and randomly choosing some portion? 4. Only a very simple data augmentation (different input training sizes) is considered in the sub-network training. What about other choices? ====== post rebuttal ======= I misunderstood some important points in the paper in my original review comments. 1. The difference between GradAug and GradAug+. I now understand that GradAug+ (GradAug with CutMix augmentation) does achieve SOTA results on several tasks. For example, for ImageNet classification, the 79.6% top-1 accuracy is SOTA as far as I know. 2. Experiments show that GradAug needs fewer epochs to converge to a good result, and this alleviates my concern about the time cost to some extent. 3. The idea also seems to work well in the setting of stochastic depth.
NIPS
Title GradAug: A New Regularization Method for Deep Neural Networks Abstract We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is utilizing randomly transformed training samples to regularize a set of sub-networks, which are originated by sampling the width of the original network, in the training process. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and therefore is termed as Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, which is a new state-of-the-art accuracy. By combining with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. The generalization ability is evaluated on COCO object detection and instance segmentation where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low data regimes. Code is available at https: //github.com/taoyang1122/GradAug N/A We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is utilizing randomly transformed training samples to regularize a set of sub-networks, which are originated by sampling the width of the original network, in the training process. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and therefore is termed as Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, which is a new state-of-the-art accuracy. By combining with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. The generalization ability is evaluated on COCO object detection and instance segmentation where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low data regimes. Code is available at https: //github.com/taoyang1122/GradAug 1 Introduction Deep neural networks have achieved great success in computer vision tasks such as image classification [1, 2], image reconstruction [3, 4], object detection [5, 6] and semantic segmentation [7, 8]. But deep neural networks are often over-parameterized and easily suffering from over-fitting. Regularization [9, 10] and data augmentation [1, 11] are widely used techniques to alleviate the over-fitting problem. Many data-level regularization methods [10, 12, 13] have achieved promising performance in image classification. These methods are similar to data augmentation where they put constraints on the input images. Although effective in image classification, these methods are hard to apply to downstream tasks such as object detection and segmentation due to their special operations. For example, the state-of-the-art CutMix [13] can not be directly applied to object detection because first, mixing samples will destroy the semantics in images; second, it is hard to interpolate the labels in these tasks. 
Another category of regularization methods imposes constraints on the network structures. [14] proposes that adding noises to the network gradients can improve generalization. Other methods [9,15,16] randomly drop some connections in the network, which implicitly introduce random noises in the training process. These methods are usually more generic but not as effective as data-level regularization. In this paper, we introduce Gradient Augmentation (GradAug), which generates meaningful disturbances to the gradients by the network itself rather than just adding random noises. The idea is that when a random transformation (e.g., random rotation, random scale, random crop, etc.) is applied to an image, a well-generalized network should still recognize the transformed image as the same object. Different from the regular data augmentation technique which only regularizes the full-network, we regularize the representations learned by a set of sub-networks, which are randomly sampled 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. from the full network in terms of the network width (i.e., number of channels in each layer). Since the representation of the full network is composed of sub-networks’ representations due to weights sharing during the training, we expect sub-networks to learn different representations from different transformations, which will lead to a well-generalized and diversified full network representation. We conduct a comprehensive set of experiments to evaluate the proposed regularization method. Using a simple random scale transformation, GradAug can improve the ImageNet Top-1 accuracy of ResNet-50 from 76.32% to 78.79%, which is a new state-of-the-art accuracy. By leveraging a more powerful data augmentation technique – CutMix [13], we can further push the accuracy to 79.67%. The representation’s generalization ability is evaluated on COCO object detection and instance segmentation tasks (Section 4.4). Our ImageNet pretrained model alone can improve the baseline MaskRCNN-R50 by +1.2 box AP and +1.2 mask AP. When applying GradAug to the detection framework, it can outperform the baseline by +1.7 box AP and +2.1 mask AP. Moreover, we demonstrate that GradAug is robust to image corruptions and adversarial attacks (Section 4.5) and is highly effective in low data settings (Section 4.6). 2 Related Work Data augmentation. Data augmentation [1, 11, 17] increases the amount and diversity of training data by linear or non-linear transformations over the original data. In computer vision, it usually includes rotation, flipping, etc. Recently, a series of regularization methods use specially-designed operations on the input images to alleviate over-fitting in deep neural networks. These methods are similar to data augmentation. Cutout [10] randomly masks out a squared region on the image to force the network to look at other image context. Dropblock [18] shares a similar idea with Cutout but it drops a region in the feature maps. Although they have achieved improvements over the regular data augmentation, such region dropout operations may lose information about the original images. Mixup [12] mixes two samples by linearly interpolating both the images and labels. CutMix [13] combines Cutout and Mixup to replace a squared region with a patch from another image. Other mixed sample variants [19, 20] all share similar ideas. 
While effective in image classification, the mixed sample augmentation is not natural to be applied to tasks such as detection and segmentation due to semantic and label ambiguities. In contrast, the proposed GradAug is a task-agnostic approach which leverages the most common image transformations to regularize sub-networks. This allows the method to be directly applied to different vision tasks and easily amenable for other applications. Structure regularization. Another category of regularization methods imposes constraints on the network weights and structure to reduce over-fitting. [14] points out that adding random noises to the gradients during training can help the network generalize better. Dropout [9] randomly drops some connections during training to prevent units from co-adapting. The random dropping operation also implicitly introduces random noises into the training process. Many following works share the idea of Dropout by randomly dropping network layers or branches. Shake-Shake [21] assigns random weights to residual branches to disturb the forward and backward passes. But it is limited to three-branch architectures. ShakeDrop [22] extends Shake-Shake to two-branch architectures (e.g., ResNet [2] and PyramidNet [23]). However, its application is still limited. [15] randomly drops a subset of layers during training. The final network can be viewed as an ensemble of many shallow networks. Although these methods have shown improvements on image classification, they are usually not as effective as data-level regularization strategies. Moreover, their generalization and effectiveness are not validated on other tasks. GradAug leverages the advantages of both categories of methods. It uses different augmentations to regularize a set of sub-networks generated from the full network in the joint training process. This introduces self-guided disturbances to the gradients of the full network rather than adding random noises. The method is more effective and generic than previous techniques. 3 GradAug 3.1 Algorithm When applying some random transformations to an image, human can still recognize it as the same object. We expect deep neural networks to have the same generalization ability. GradAug aims to regularize sub-networks with differently transformed training samples. There are various of methods to generate sub-networks during training. Previous works [9, 15, 16] usually stochastically drop some neurons, layers or paths. In GradAug, we expect the final full-network to take advantage of the learned representations of the sub-networks. Therefore, we sample sub-networks in a more structured manner, that is by the network width. We define θ as the model parameter. Without loss of generality, we use convolutional layers for illustration, then θ ∈ Rc1×c2×k×k, where c1 and c2 are number of input and output channels, k is the convolution kernel size. We define the width of a sub-network as w ∈ [α, 1.0], where α is the width lower bound. The weights of the sub-network is θw. Different from random sampling, we always sample the first w × 100% channels of the full-network and the sub-network weights are θw ∈ Rwc1×wc2×k×k. In this way, a larger sub-network always share the representations of a smaller sub-network in a weights-sharing training fashion, so it can leverage the representations learned in smaller sub-networks. Iteratively, sub-networks can construct a full-network with diversified representations. 
Figure 1 shows the class activation maps (CAM) [24] of the sub-network and full-network. The full-network pays attention to several regions of the object because it can leverage the representation of the sub-network. For example, when the sub-network (w = 0.9) focuses on one dog in the image, the full-network shares this attention and uses the other network part to capture the information of another dog. Therefore, the full-network learns richer semantic information in the image, while the baseline model only models a single region and does not fully comprehend the salient information of the image. To make the method simple and generic, we choose among the most commonly used transformations such as random scales, random rotations, random crops, etc. In the experiments, we show that a simple random scale transformation can already achieve state-of-the-art performance on image classification, and it can be directly applied to other applications. Moreover, we can use more powerful augmentations such as CutMix for further enhanced performance. Training procedure. The training procedure of GradAug is very similar to the regular network training. In each training iteration, we train the full-network with the original images, which is the same as the regular training process. Then we additionally sample n sub-networks and train them with randomly transformed images. Finally, we accumulate the losses of full-network and sub-networks to update the model weights. This naive training approach achieves good training accuracy but the testing accuracy is very low. This is caused by the batch normalization (BN) [25] layers. The BN layer will collect a moving average of training batches’ means and variances during training. The collected mean and variance will be used during inference. However, the batch mean and variance in the sub-networks can be very different from those in the full-network because the training samples are randomly transformed. This will cause the final BN mean and variance to be inappropriate for the full-network during inference. But in the training phase, BN uses the mean and variance of the current batch, so the training behaves normally. To obtain the correct BN statistics for the full-network, we do not update BN mean and variance when training the sub-networks. Only the full-network is allowed to collect these statistics. However, the weights in BN layer are still updated by sub-networks because they can be shared with full-network. To further improve the performance, we also leverage two training tricks in [26]. First, we use the output of the full-network as soft labels to train the sub-networks. Second, we always sample the smallest sub-network (i.e., w = α) during training if n > 1. The effect of these two training tricks is provided in the supplementary material. The Pytorch-style pseudo-code of GradAug is presented in Algorithm 1. 3.2 Analysis of gradient property We provide an in-depth analysis of GradAug from the perspective of gradient flow. For simplicity, we consider a fully connected network with 1-D training samples. We define the network as N . The parameter of one layer in the full-network is θ ∈ Rc1×c2 . The parameter of sub-networks is θw as explained in Section 3.1. x ∈ Rd is the training sample and y is its label. The output of the network is denoted as N(θ, x), and the training loss is l(N(θ, x), y) where l is the loss function, which is often the cross entropy in image classification. 
The loss and gradients in a standard training process Algorithm 1 Gradient Augmentation (GradAug) Input: Network Net. Training image img. Random transformation T . Number of sub-networks n. Subnetwork width lower bound α. . Train full-network. Forward pass, outputf = Net(img) Compute loss, lossf = criterion(output, target) . Regularize sub-networks. for i in range(n) do Sample a sub-network, subneti = Sample(Net, α) Fix BN layer’s mean and variance, subneti.track_running_stats = False Forward pass with transformed images, outputi = subneti(T i(img)) Compute loss with soft labels, lossi = criterion(outputi, outputf ) end for Compute total loss, L = lossf + ∑n i=1 lossi Compute gradients and do backward pass are computed as Lstd = l(N(θ, x), y), gstd = ∂Lstd ∂θ , (1) where gstd ∈ Rc1×c2 . Structure regularization methods [9, 15, 16] randomly drop some connections in the network, and their loss and gradients can be computed as Lsr = l(N(θrand, x), y), gsr = ∂Lsr ∂θrand . (2) We can view gsr has the same shape as gstd where the gradients of disabled connections are 0. Therefore, we can rewrite gsr as gsr = gstd + gnoise, (3) where gnoise ∈ Rc1×c2 is a random matrix which introduces some random disturbances to the gradients. In contrast, GradAug applies more meaningful disturbances to the gradients. Let T be the random transformation operation (e.g., random scale, random rotation, etc.) and T i be the transformation to sub-network i (i = [1, ..., n]). The loss and gradients are computed as: LGA = l(N(θ, x), y) + n∑ i=1 l(N(θwi , T i(x)), N(θ, x)) gGA = ∂l(N(θ, x), y) ∂θ + n∑ i=1 ∂l(N(θwi , T i(x)), N(θ, x)) ∂θwi = gstd + g ′. (4) gGA has a similar form with gsr. The first term is the same as the gradients in standard training. But the second term g′ is derived by the sub-networks with transformed training samples. Since sub-networks are part of the full-network, we call this term “self-guided”. It reinforces good descent directions, leading to improved performance and faster convergence. g′ can be viewed as an augmentation to the raw gradients gstd. It allows different parts of the network to learn diverse representations. The gradients of data-level regularization methods are similar to gstd, with the difference only in the training sample. The gradients are gdr = ∂l(N(θ, f(x)), y) ∂θ , (5) where f is the augmentation method such as CutMix. GradAug can also leverage these augmentations by applying them to the original samples and then following random transformations. The gradients become gGA = ∂l(N(θ, f(x)), y) ∂θ + n∑ i=1 ∂l(N(θwi , T i(f(x))), N(θ, f(x))) ∂θwi = gdr + g ′. (6) g′ is still an augmentation to gdr. Data augmentation can also be combined with other structure regularization methods. However, similar to the derivations in Eq. 2 and Eq. 3, such combination strategy introduces random noises to gdr, which is not as effective as GradAug as shown in Table 3. 4 Experiments We first evaluate the effectiveness of GradAug on image classification. Next, we show the generalization ability of GradAug on object detection and instance segmentation. Finally, we demonstrate that GradAug can improve the model’s robustness to image distortions and adversarial attacks. We also show GradAug is effective in low data settings and can be extended to semi-supervised learning. 4.1 ImageNet classification Implementation details. ImageNet [27] dataset contains 1.2 million training images and 50,000 validation images in 1000 categories. 
We follow the same data augmentations in [13] to have a fair comparison. On ResNet-50, we train the model for 120 epochs with a batch size of 512. The initial learning rate is 0.2 with cosine decay schedule. We sample n = 3 sub-networks in each training iteration and the width lower bound is α = 0.9. For simplicity, we only use random scale transformation for sub-networks. That is the input images are randomly resized to one of {224× 224, 192× 192, 160× 160, 128× 128}. Note that we report the final-epoch accuracy rather than the highest accuracy in the whole training process as is reported in CutMix [13]. We evaluate GradAug and several popular regularization methods on the widely used ResNet-50 [2]. The results are shown in Table 1. GradAug achieves a new state-of-the-art performance of 78.79% based on ResNet-50. Specifically, GradAug significantly outperforms the structure regularization methods by more than 1 point. As illustrated in Eq. 3 and Eq. 4, GradAug has a similar form with structure regularization. The difference is that GradAug introduces self-guided disturbances to augment the raw gradients. The large improvement over the structure regularization methods clearly validates the effectiveness of our proposed method. As shown in Eq. 6, GradAug can be seamlessly combined with data augmentation. We combine GradAug with CutMix (p=0.5) and denote this method as GradAug†. We compare GradAug† with bag of tricks [28] at the bottom of Table 1. It is evident that GradAug† outperforms bag of tricks both in model complexity and accuracy. Note that bag of tricks includes a host of advanced techniques such as model tweaks, training refinements, label smoothing, knowledge distillation, Mixup augmentation, etc., while GradAug is as easy as regular model training. Due to the sub-networks in GradAug training, one natural question arises: Would the training cost of GradAug increase significantly? As stated in [13], typical regularization methods [12, 13, 18] require more training epochs to converge, while GradAug converges with less epochs. Thus the total training time is comparable. The memory cost is also comparable because sub-networks do forward and back-propagation one by one, and only their gradients are accumulated to update the weights. Table 2 shows the comparison on ImageNet. The training cost is measured on an 8× 1080Ti GPU server with a batch size of 512. We can see that the training time of GradAug is comparable with state-of-the-art regularization methods such as CutMix. 4.2 Cifar classification Implementation details. We also evaluate GradAug on Cifar-100 dataset [29]. The dataset has 50,000 images for training and 10,000 images for testing in 100 categories. We choose WideResNet [30] and PyramidNet [23] structures as they achieve state-of-the-art performance on Cifar dataset. We follow the training setting in [23,30] in our experiments. For WideResNet, we train the model for 200 epochs with a batch size of 128. The initial learning rate is 0.1 with cosine decay schedule. Weight decay is 0.0005. PyramidNet is trained for 300 epochs with a batch size of 64. The initial learning rate is 0.25 and decays by a factor of 0.1 at 150 and 225 epochs. Weight decay is 0.0001. We use random scale transformation where input images are resized to one of {32× 32, 28× 28, 24× 24}. The number of sub-networks is n = 3 and the width lower bound is α = 0.8. The results are compared in Table 3. 
GradAug is comparable with the state-of-the-art CutMix, and it clearly outperforms the best structure regularization method ShakeDrop, which validate the effectiveness of the self-guided augmentation to the raw gradients. We further illustrate this by comparing GradAug† with CutMix + ShakeDrop. On WideResNet, ShakeDrop severely degrades the Top-1 accuracy of CutMix by 2.44%, while GradAug consistently improves CutMix by more than 1 point. The reason is that ShakeDrop introduces random noises to the training process, which is unstable and ineffective in some cases. However, GradAug is a self-guided augmentation to the gradients, which makes it compatible with various structures and data augmentations. 4.3 Ablation study We study the contribution of random width sampling and random transformation to the performance, respectively. We also show the impact of the number of sub-networks n and the width lower bound α. The experiments are conducted on Cifar-100 based on the WideResNet-28-10 backbone. Random width sampling and random transformation. We study the effect of one component by abandoning the other one. First, we do not randomly sample sub-networks. Then GradAug becomes multi-scale training in our experiments. In each iteration, we feed different scaled images to the network. Second, we do not conduct random scale transformation. In each iteration, we sample 3 sub-networks and feed them with the original images. The results are shown in Table 4. Random scale and random width sampling only achieve marginal improvements over the baseline, but GradAug remarkably enhances the baseline (+2.43%). This reaffirms the effectiveness of our method, which unifies data augmentation and structure regularization in the same framework for better performance. Number of sub-networks and width lower bound. There are two hyperparameters in GradAug, the number of sub-networks n and sub-network width lower bound α. We first explore the effect of n. Other settings are the same as Section 4.2. The results are shown in Figure 2. A larger n tends to achieve higher performance since it involves more self-guided gradient augmentations. The accuracy plateaus when n ≥ 3. Note that even one sub-network can significantly improve the baseline. Then we investigate the impact of width lower bound α by fixing other settings. As shown in Figure 2, α = 0.8 achieves the best accuracy, but all the values clearly outperform the baseline. GradAug is not sensitive to these hyperparameters. Empirically, we can set n ≥ 3 and α ∈ [0.7, 0.9]. Effect of different transformations. As shown in experiments above, GradAug is very effective when leveraging random scale transformation and CutMix. Here we further explore other transformations, including random rotation transformation and the combination of random scale and rotation transformations. We conduct the experiments on WideResNet-28-10 and ResNet-50 following the settings above. For random rotation, we randomly rotate the images by a degree of {0◦, 90◦, 180◦, 270◦}. For the combination, the input images are first randomly rotated and then randomly resized. The results are shown in Table 5. It is clear that both transformations (random scale and random rotation) and their combination achieve significant improvements over the baseline. This validates our idea of regularizing sub-networks by different transformed images. Generating sub-networks by stochastic depth. In the experiments above, we generate subnetworks by cutting the network width. 
Similarly, we can generate sub-networks by shrinking the network depth. We follow StochDepth [15] to randomly drop some layers during training. The training settings are the same as [15] and we use the random scale transformation to regularize the sub-networks. As shown in Table 6, GradAug significantly outperforms the baseline and StochDepth. This demonstrates that GradAug can be generalized to depth-shortened sub-networks and again verifies the effectiveness of our idea.

4.4 Object detection and instance segmentation
To evaluate the generalization ability of the representations learned by GradAug, we finetune its ImageNet pretrained model for COCO [31] object detection and instance segmentation. The experiments are based on the Mask-RCNN-FPN [6, 32] framework and the MMDetection toolbox [33] with a ResNet-50 backbone. Mixup and CutMix, the two most effective methods in image classification, are employed for comparison. As explained in Section 2, Mixup and CutMix are mixed sample data augmentation methods, which cannot be applied to object detection and segmentation. Therefore, we compare these methods by directly finetuning their ImageNet pretrained models on the COCO dataset. All models are trained with the 1× schedule on COCO. The image resolution is 1000 × 600. The mean Average Precision (AP at IoU=0.50:0.05:0.95) is reported in Table 7. We can see that although Mixup and CutMix achieve large improvements on ImageNet classification, the learned representations can barely benefit object detection and segmentation. In contrast, the GradAug-pretrained model considerably improves the performance of Mask-RCNN. This validates that GradAug enables the model to learn well-generalized representations which transfer well to other tasks. Moreover, the training procedure of GradAug can be directly applied to the detection framework. The result (last line of Table 7) shows that it further boosts the performance compared with GradAug-pretrained and can significantly improve the baseline by +1.7 det mAP and +2.1 seg mAP. The implementation details and qualitative results are in the supplementary material.

4.5 Model robustness
Deep neural networks are easily fooled by unrecognizable changes to input images. Developing robust machine learning models is pivotal for safety-critical applications. In this section, we evaluate the model's robustness to two kinds of perturbations: image corruptions and adversarial attacks.
Image corruption. The ImageNet-C dataset [34] is created by introducing a set of 75 common visual corruptions to ImageNet classification. ImageNet-C has 15 types of corruptions drawn from four categories (noise, blur, weather and digital). Each type of corruption has 5 levels of severity. Corruptions are applied to the validation set only. Models trained on clean ImageNet should be tested on the corrupted validation set without retraining. We follow the evaluation metrics in [34] to test ResNet-50 trained by different regularization methods. The mean corruption error (mCE) is reported in Table 8. Mixup has a lower mCE than the other methods. We conjecture the reason is that Mixup proportionally combines two samples, which resembles the way the corrupted images are generated. GradAug outperforms the second best competing method CutMix by 1.4%. Note that GradAug can also be combined with Mixup, and we denote it as GradAug*. The results in Table 8 reveal that GradAug* further improves Mixup and achieves the lowest mCE. This demonstrates that GradAug is capable of leveraging the advantages of different augmentations.
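For readers unfamiliar with the protocol, the corruption evaluation described above can be sketched roughly as below. The loader layout is an assumption for illustration, and the official mCE additionally normalizes each corruption error by AlexNet's error on the same corruption, which this simplified sketch omits.

```python
# Simplified ImageNet-C style evaluation: average top-1 error over corruption types and severities.
import torch

@torch.no_grad()
def top1_error(model, loader, device='cuda'):
    model.eval()
    wrong, total = 0, 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        preds = model(images).argmax(dim=1)
        wrong += (preds != targets).sum().item()
        total += targets.numel()
    return wrong / total

def mean_corruption_error(model, corrupted_loaders):
    # corrupted_loaders: dict mapping corruption name -> list of 5 loaders (severity 1..5).
    # (Assumed data structure; ImageNet-C ships one folder per corruption and severity.)
    per_corruption = []
    for name, loaders in corrupted_loaders.items():
        errs = [top1_error(model, ld) for ld in loaders]
        per_corruption.append(sum(errs) / len(errs))
    return sum(per_corruption) / len(per_corruption)
```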
Adversarial attack. We also evaluate model robustness to adversarial samples. Different from image corruption, adversarial attack uses a small distortion which is carefully crafted to confuse a classifier. We use Fast Gradient Sign Method (FGSM) [35] to generate adversarial distortions and conduct white-box attack to ResNet-50 trained by different methods. The classification accuracy on adversarially attacked ImageNet validation set is reported in Table 9. Note that here Mixup is not as robust as to image corruptions, which validates our aforementioned conjecture in the image corruption experiment. GradAug and CutMix are comparable and both significantly outperform other methods. GradAug† further gains improvements over GradAug and CutMix, manifesting superiority of our self-guided gradient augmentation. 4.6 Low data setting Deep neural network models suffer from more severe over-fitting when there is only limited amount of training data. Thus we expect regularization methods to show its superiority in low data setting. However, we find that state-of-the-art methods are not as effective as supposed. For a fair comparison, we follow the same hyperparameter settings in [37]. The backbone network is WideResNet-28-2. We first evaluate different methods on Cifar-10 with 250, 1000 and 4000 labels. Training images are sampled uniformly from 10 categories. We run each model on 5 random data splits and report the mean and standard deviation in Table 10. We observe that CutMix (p=0.5) and ShakeDrop even degrade the baseline model performance, especially when labels are very limited. CutMix mixes images and their labels, which introduces strong noises to the data and ground truth labels. This is effective when there is enough clean labels to learn a good baseline. But when the baseline is weak, this disturbance is too severe. We reduce the impact of CutMix by setting p=0.1, where CutMix is barely used during training. CutMix still harms the baseline when there are only 250 labels, but it becomes beneficial when there are 4000 labels. ShakeDrop has a similar trend with CutMix since it introduces noises to the structure. In contrast, GradAug significantly and consistently enhances the baseline in all cases because it generates self-guided augmentations to the baseline rather than noises. Moreover, GradAug can be easily extended to semi-supervised learning (SSL). We can leverage the full-network to generate labels for unlabeled data and use them to train the sub-networks. See the supplementary material for implementation details. Our GradAug-semi can further improve the performance over GradAug. It even achieves comparable performance with Mean Teacher [36], which is a popular SSL algorithm. We also evaluate the methods on STL-10 dataset [38]. The dataset is designed to test SSL algorithms, where the unlabeled data are sampled from a different distribution than labeled data. Similarly, CutMix and ShakeDrop are not effective while GradAug and GradAug-semi achieve clear improvements. 5 Conclusion In this paper, we propose GradAug which introduces self-guided augmentations to the network gradients during training. The method is easy to implement while being effective. It achieves a new state-of-the-art accuracy on ImageNet classification. The generalization ability is verified on COCO object detection and instance segmentation. GradAug is also robust to image corruption and adversarial attack. 
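Stepping back briefly to the robustness evaluation in Section 4.5: the FGSM white-box attack used there is standard and can be sketched as follows. The epsilon value and any clamping to the valid pixel range are left unspecified, since the paper does not restate them in this passage.

```python
# Minimal FGSM white-box attack sketch: one signed-gradient step that increases the loss.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, targets, epsilon):
    model.eval()
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), targets)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.detach()

# Robust accuracy is then the standard top-1 accuracy measured on fgsm_attack(model, x, y, eps).
```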
We further reveal that current state-of-the-art methods do not perform well in the low data setting, while GradAug consistently enhances the baseline in all cases.

Acknowledgments
This work is partially supported by the National Science Foundation (NSF) under Grant No. 1910844 and the NSF/Intel Partnership on MLWiNS under Grant No. 2003198.

Broader Impact
The proposed regularization method is a generic approach for training deep neural networks. Researchers in the machine learning and computer vision communities should benefit from this work. To the best of our knowledge, we do not think this research will put anyone at a disadvantage. All the experiments are based on public datasets and follow standard experimental settings. Thus the method does not leverage biases in the data.
1. What are the main contributions and strengths of the paper regarding the proposed GradAug method? 2. What are the weaknesses of the paper, particularly in terms of validation and comparison with other techniques? 3. Do you have any concerns about the novelty of the approach, especially in relation to training larger networks and compressing them? 4. How does the reviewer assess the explanation and evidence provided by the authors regarding the operating mechanism behind GradAug? 5. Are there any questions or concerns regarding the applicability and effectiveness of the 'self-guided gradient augmentation' technique? 6. How does the reviewer evaluate the thoroughness and adequacy of the comparisons made in the paper, especially with respect to other regularization techniques? 7. Are there any questions or concerns regarding the performance cost and memory usage of the GradAug method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
After rebuttal and discussion with other reviewers I have updated my score. However, I do point out several concerns of mine for which the authors could consider further validation: It's good that the authors performed the time/memory comparison in the rebuttal, as that was a significant concern of mine. My concerns mostly revolve around the question of what other techniques this should be compared against. Given that this algorithm takes 3-4x the time in comparison to the baseline, I could for example:
1. Train a much larger network and then use compression techniques to slim it to the same size.
2. Train an even larger network at a lower quantization and then use compression.
3. Train a larger network with competing regularization techniques (e.g., Mixup, which is still ~70% faster; see rebuttal) and then use compression.
Is it sufficiently true that this approach is in fact *not* aliasing the idea of training a larger network, then compressing it? Is it sufficiently true that the novelty of the approach (i.e., subnetwork learning) cannot be captured by training a larger network, then compressing? These questions would be nice to have answered with convincing validation, and my score would certainly be higher if they were validated.
This paper proposes a novel form of regularization dubbed 'GradAug'. GradAug consists of two components. One is structured subsampling of neural networks similar to dropout. The second is a 'self guided gradient augmentation' technique. GradAug iterates upon previous work such as dropout/zoneout/freezeout to improve the robustness of neural networks by training them as 'ensembles' of smaller subnetworks. Some insight is provided into the underlying operating mechanism behind GradAug. Validation is also performed on GradAug demonstrating improved test performance on CIFAR10/ImageNet. The general applicability of the technique is demonstrated by testing on other image tasks. Finally, it is demonstrated that GradAug improves the robustness of the network to adversarial attacks and image corruption.
Strengths
The empirical results shown by the approach are quite strong, and certainly demonstrate the possible validity of the approach. There is some amount of novelty in the approach; however, it builds upon previous work considering a network as an ensemble of smaller subnetworks (i.e., dropout et al.).
Weaknesses
There remain several weaknesses in the paper, mostly centering around thoroughness of validation and inadequate comparison to other (semi-related) work. In particular, I found the explanation of how the two proposed components of GradAug work to be insufficient. It is not sufficiently explained (e.g., in 3.2) *how* the structured subnetwork sampling, as well as the 'self guided gradient augmentation', aid in building a robust network. Although I agree the empirical results are impressive, I'm not sure how this approach works outside of imagining it as some sort of 'Dropout, but better' technique. From what I can tell, the 'self guided gradient augmentation' prefers networks which are robust (i.e., preferring little to no change in the output, given scaling/transformations according to Eq. 4) to certain types of transformations (e.g., scaling, rotation, translation, etc.). This should improve robustness; however, the authors claim this is some sort of 'self guiding'. It's not exactly clear what is meant by this.
The authors need to better explain exactly what is hoped to be achieved by the self guiding, and also to furnish evidence through empirical experiments, or perhaps proof/symbolic intuition. It is also not fully well explored how the structured subnetwork training improves upon Dropout, and how it aids in training and regularizing networks. The authors mention a 'a larger sub-network always share the representations of a smaller sub-network in a weights-sharing training fashion, so it can leverage the representations learned in smaller sub-networks.' What is precisely meant by this? I'm not sure I understand, or believe that such an assertion is true. This is not elaborated upon in later sections or empirically in the validation. Two additional weaknesses in the paper are a lack of mention/evaluation on the performance cost of GradAug. From what I can tell, GradAug requires several forward passes for each step of training. How significantly does this affect memory/training time? This would have been helpful to help evaluate this work, as typical regularization methods are usually cheap to apply. In the proposed approach, several forward passes are made using subnetworks to provide gradient for a single backward pass. This can be thought of as distilling the knowledge of a more expressive, powerful network into a smaller one. In this way this approach can be viewed somewhat analogous to neural network compression. Although this comparison is not perfect, it possibly deserves further comparison. Overall, this paper, though shows strong empirical results, has not fully or well explored its proposed approach. Due to this reason, I don't believe it is ready for publication.
NIPS
Title GradAug: A New Regularization Method for Deep Neural Networks

Abstract
We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is utilizing randomly transformed training samples to regularize a set of sub-networks, which are generated by sampling the width of the original network, in the training process. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and is therefore termed Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, which is a new state-of-the-art accuracy. By combining with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. The generalization ability is evaluated on COCO object detection and instance segmentation, where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low data regimes. Code is available at https://github.com/taoyang1122/GradAug

1 Introduction
Deep neural networks have achieved great success in computer vision tasks such as image classification [1, 2], image reconstruction [3, 4], object detection [5, 6] and semantic segmentation [7, 8]. But deep neural networks are often over-parameterized and easily suffer from over-fitting. Regularization [9, 10] and data augmentation [1, 11] are widely used techniques to alleviate the over-fitting problem. Many data-level regularization methods [10, 12, 13] have achieved promising performance in image classification. These methods are similar to data augmentation in that they put constraints on the input images. Although effective in image classification, these methods are hard to apply to downstream tasks such as object detection and segmentation due to their special operations. For example, the state-of-the-art CutMix [13] cannot be directly applied to object detection because, first, mixing samples will destroy the semantics in images and, second, it is hard to interpolate the labels in these tasks.
Another category of regularization methods imposes constraints on the network structures. [14] proposes that adding noise to the network gradients can improve generalization. Other methods [9, 15, 16] randomly drop some connections in the network, which implicitly introduces random noise into the training process. These methods are usually more generic but not as effective as data-level regularization. In this paper, we introduce Gradient Augmentation (GradAug), which generates meaningful disturbances to the gradients by the network itself rather than just adding random noise. The idea is that when a random transformation (e.g., random rotation, random scale, random crop, etc.) is applied to an image, a well-generalized network should still recognize the transformed image as the same object. Different from the regular data augmentation technique, which only regularizes the full-network, we regularize the representations learned by a set of sub-networks, which are randomly sampled from the full network in terms of the network width (i.e., number of channels in each layer). Since the representation of the full network is composed of the sub-networks' representations due to weight sharing during training, we expect sub-networks to learn different representations from different transformations, which will lead to a well-generalized and diversified full-network representation. We conduct a comprehensive set of experiments to evaluate the proposed regularization method. Using a simple random scale transformation, GradAug can improve the ImageNet Top-1 accuracy of ResNet-50 from 76.32% to 78.79%, which is a new state-of-the-art accuracy. By leveraging a more powerful data augmentation technique – CutMix [13], we can further push the accuracy to 79.67%. The representation's generalization ability is evaluated on COCO object detection and instance segmentation tasks (Section 4.4). Our ImageNet pretrained model alone can improve the baseline MaskRCNN-R50 by +1.2 box AP and +1.2 mask AP. When applying GradAug to the detection framework, it can outperform the baseline by +1.7 box AP and +2.1 mask AP. Moreover, we demonstrate that GradAug is robust to image corruptions and adversarial attacks (Section 4.5) and is highly effective in low data settings (Section 4.6).

2 Related Work
Data augmentation. Data augmentation [1, 11, 17] increases the amount and diversity of training data by linear or non-linear transformations over the original data. In computer vision, it usually includes rotation, flipping, etc. Recently, a series of regularization methods use specially-designed operations on the input images to alleviate over-fitting in deep neural networks. These methods are similar to data augmentation. Cutout [10] randomly masks out a square region on the image to force the network to look at other image context. Dropblock [18] shares a similar idea with Cutout but drops a region in the feature maps. Although they have achieved improvements over regular data augmentation, such region dropout operations may lose information about the original images. Mixup [12] mixes two samples by linearly interpolating both the images and the labels. CutMix [13] combines Cutout and Mixup to replace a square region with a patch from another image. Other mixed sample variants [19, 20] all share similar ideas.
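For reference, the mixed-sample augmentations discussed above follow a simple recipe; a minimal sketch of Mixup is given below. The Beta parameter is illustrative, and the note on CutMix (same label mixing, but a pasted rectangular patch instead of pixel interpolation) is a summary rather than the exact implementation.

```python
# Minimal Mixup sketch: convex combination of two images, with the loss interpolated accordingly.
import numpy as np
import torch

def mixup_batch(images, targets, alpha=0.2):
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(images.size(0))
    mixed = lam * images + (1.0 - lam) * images[perm]
    # Training loss is then:
    # loss = lam * criterion(model(mixed), targets) + (1 - lam) * criterion(model(mixed), targets[perm])
    return mixed, targets, targets[perm], lam
```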
While effective in image classification, mixed sample augmentation is not naturally applicable to tasks such as detection and segmentation due to semantic and label ambiguities. In contrast, the proposed GradAug is a task-agnostic approach which leverages the most common image transformations to regularize sub-networks. This allows the method to be directly applied to different vision tasks and makes it easily amenable to other applications.
Structure regularization. Another category of regularization methods imposes constraints on the network weights and structure to reduce over-fitting. [14] points out that adding random noise to the gradients during training can help the network generalize better. Dropout [9] randomly drops some connections during training to prevent units from co-adapting. The random dropping operation also implicitly introduces random noise into the training process. Many following works share the idea of Dropout by randomly dropping network layers or branches. Shake-Shake [21] assigns random weights to residual branches to disturb the forward and backward passes, but it is limited to three-branch architectures. ShakeDrop [22] extends Shake-Shake to two-branch architectures (e.g., ResNet [2] and PyramidNet [23]). However, its application is still limited. [15] randomly drops a subset of layers during training; the final network can be viewed as an ensemble of many shallow networks. Although these methods have shown improvements on image classification, they are usually not as effective as data-level regularization strategies. Moreover, their generalization and effectiveness are not validated on other tasks. GradAug leverages the advantages of both categories of methods. It uses different augmentations to regularize a set of sub-networks generated from the full network in the joint training process. This introduces self-guided disturbances to the gradients of the full network rather than adding random noise. The method is more effective and generic than previous techniques.

3 GradAug
3.1 Algorithm
When applying some random transformations to an image, humans can still recognize it as the same object. We expect deep neural networks to have the same generalization ability. GradAug aims to regularize sub-networks with differently transformed training samples. There are various methods to generate sub-networks during training. Previous works [9, 15, 16] usually stochastically drop some neurons, layers or paths. In GradAug, we expect the final full-network to take advantage of the learned representations of the sub-networks. Therefore, we sample sub-networks in a more structured manner, that is, by the network width. We define θ as the model parameter. Without loss of generality, we use convolutional layers for illustration; then θ ∈ R^{c1×c2×k×k}, where c1 and c2 are the numbers of input and output channels and k is the convolution kernel size. We define the width of a sub-network as w ∈ [α, 1.0], where α is the width lower bound. The weights of the sub-network are θ_w. Different from random sampling, we always sample the first w × 100% channels of the full-network, so the sub-network weights are θ_w ∈ R^{wc1×wc2×k×k}. In this way, a larger sub-network always shares the representations of a smaller sub-network in a weight-sharing training fashion, so it can leverage the representations learned in smaller sub-networks. Iteratively, sub-networks can construct a full-network with diversified representations.
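To illustrate the width-based sub-network definition above (θ_w as the first w×100% channels of every layer), here is a toy "slimmable" convolution. It is a sketch under the stated assumptions, not the authors' implementation; a real network would also need width-aware BN handling.

```python
# Toy slimmable convolution: a sub-network of width w reuses the first w*100% of the filters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.width = 1.0  # set externally before each forward pass

    def forward(self, x):
        out_ch = max(1, int(self.out_channels * self.width))
        in_ch = x.size(1)  # use however many channels the previous layer produced
        weight = self.weight[:out_ch, :in_ch]          # theta_w: the first channels only
        bias = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding, self.dilation)

# Example: the same layer evaluated at full width and at width 0.9.
layer = SlimmableConv2d(64, 128, kernel_size=3, padding=1)
x = torch.randn(2, 64, 32, 32)
full = layer(x)        # uses all 128 filters
layer.width = 0.9
sub = layer(x)         # uses the first 115 filters, whose weights are shared with the full layer
```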
Figure 1 shows the class activation maps (CAM) [24] of the sub-network and full-network. The full-network pays attention to several regions of the object because it can leverage the representation of the sub-network. For example, when the sub-network (w = 0.9) focuses on one dog in the image, the full-network shares this attention and uses the other network part to capture the information of another dog. Therefore, the full-network learns richer semantic information in the image, while the baseline model only models a single region and does not fully comprehend the salient information of the image. To make the method simple and generic, we choose among the most commonly used transformations such as random scales, random rotations, random crops, etc. In the experiments, we show that a simple random scale transformation can already achieve state-of-the-art performance on image classification, and it can be directly applied to other applications. Moreover, we can use more powerful augmentations such as CutMix for further enhanced performance. Training procedure. The training procedure of GradAug is very similar to the regular network training. In each training iteration, we train the full-network with the original images, which is the same as the regular training process. Then we additionally sample n sub-networks and train them with randomly transformed images. Finally, we accumulate the losses of full-network and sub-networks to update the model weights. This naive training approach achieves good training accuracy but the testing accuracy is very low. This is caused by the batch normalization (BN) [25] layers. The BN layer will collect a moving average of training batches’ means and variances during training. The collected mean and variance will be used during inference. However, the batch mean and variance in the sub-networks can be very different from those in the full-network because the training samples are randomly transformed. This will cause the final BN mean and variance to be inappropriate for the full-network during inference. But in the training phase, BN uses the mean and variance of the current batch, so the training behaves normally. To obtain the correct BN statistics for the full-network, we do not update BN mean and variance when training the sub-networks. Only the full-network is allowed to collect these statistics. However, the weights in BN layer are still updated by sub-networks because they can be shared with full-network. To further improve the performance, we also leverage two training tricks in [26]. First, we use the output of the full-network as soft labels to train the sub-networks. Second, we always sample the smallest sub-network (i.e., w = α) during training if n > 1. The effect of these two training tricks is provided in the supplementary material. The Pytorch-style pseudo-code of GradAug is presented in Algorithm 1. 3.2 Analysis of gradient property We provide an in-depth analysis of GradAug from the perspective of gradient flow. For simplicity, we consider a fully connected network with 1-D training samples. We define the network as N . The parameter of one layer in the full-network is θ ∈ Rc1×c2 . The parameter of sub-networks is θw as explained in Section 3.1. x ∈ Rd is the training sample and y is its label. The output of the network is denoted as N(θ, x), and the training loss is l(N(θ, x), y) where l is the loss function, which is often the cross entropy in image classification. 
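A small self-contained check of the BN handling described in the training procedure above (keeping running statistics frozen during sub-network passes while the shared affine parameters still receive gradients) might look like this. It only demonstrates the mechanism and is not taken from the authors' code.

```python
# Demonstrates that track_running_stats=False freezes BN running estimates
# during a pass, while weight/bias still receive gradients.
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(8)                   # modules are created in training mode
x = torch.randn(4, 8, 16, 16) + 5.0

# Full-network pass: batch statistics are folded into the running estimates.
bn.track_running_stats = True
bn(x)
saved_mean = bn.running_mean.clone()     # roughly 0.5 per channel with default momentum 0.1

# Sub-network pass on a "transformed" input: freeze the running statistics.
bn.track_running_stats = False
out = bn(x * 2.0)
out.sum().backward()

print(torch.allclose(bn.running_mean, saved_mean))  # True: stats untouched by this pass
print(bn.weight.grad is not None)                   # True: shared affine weights still get gradients
```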
1. What is the focus and contribution of the paper regarding regularization methods? 2. What are the strengths of the proposed approach, particularly in its novelty and effectiveness? 3. What are the weaknesses of the paper, especially regarding experimental validation and robustness claims? 4. Do you have any concerns or suggestions regarding the implementation or application of the proposed method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposed a new regularization method that leverages differently transformed input to regularize a set of sub-networks originated from the full-network. The idea is that sub-networks should recognize transformed images as the same objects with the full-network. The author analyzes its effect from the gradient view and conducted thorough experiments to validate the idea. The method is demonstrated to outperform state-of-the-art methods on different tasks. Strengths 1.The idea of leveraging different transformed images to regularize the sub-networks and thereby providing self-guided augmentation to the gradients is novel and interesting. It is simple to implement and can be combined with other regularization schemes. Overall, the paper is well-presented and the proposed idea is clearly conveyed. 2.The analysis on the gradient property of different methods is reasonable and useful to understand the differences between this work and other regularization techniques. The claim is also validated in the experiments. 3.The experimental results are very promising based on a set of experiments for a range of tasks. Specifically, the authors demonstrate that state-of-the-art methods can hardly improve in downstream tasks and are not effective in low data setting, while the proposed method shows effectiveness in these tasks. Weaknesses 1.The author may need more experiments to show the effect of different transformations. Though in the supplementary the authors experimented with random rotation and the combination of random scale and rotation, more experiments on different models and larger datasets will make it more convincing. There might be space issue, but I think this experiment should also be put in the main paper rather than the supplementary. 2. The claim on model robustness to adversarial attack may be too strong. FGSM is just one type of adversarial attack approach. To claim the general model robustness, the authors may need more experiments on different adversarial attacks. While I can understand that the proposed method is not focused on adversarial attack, it still would be more precise to claim the robustness to FGSM attack based on the experiments in the paper. 3. Although it is stated that the training procedure of the proposed GradAug is similar to the regular network, the training time for each epoch might be longer due to sub-networks. It would be good to include such discussion and analysis. 4. In ImageNet classification experiment, the images are randomly resized to one of {224, 192, 160, 128}. It is not clear why these particular resolutions are selected. How the image resolution and the number of resolutions that the sub-networks can choose would affect the performance?
NIPS
Title GradAug: A New Regularization Method for Deep Neural Networks Abstract We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is utilizing randomly transformed training samples to regularize a set of sub-networks, which are originated by sampling the width of the original network, in the training process. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and therefore is termed as Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, which is a new state-of-the-art accuracy. By combining with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. The generalization ability is evaluated on COCO object detection and instance segmentation where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low data regimes. Code is available at https: //github.com/taoyang1122/GradAug N/A We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is utilizing randomly transformed training samples to regularize a set of sub-networks, which are originated by sampling the width of the original network, in the training process. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and therefore is termed as Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, which is a new state-of-the-art accuracy. By combining with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. The generalization ability is evaluated on COCO object detection and instance segmentation where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low data regimes. Code is available at https: //github.com/taoyang1122/GradAug 1 Introduction Deep neural networks have achieved great success in computer vision tasks such as image classification [1, 2], image reconstruction [3, 4], object detection [5, 6] and semantic segmentation [7, 8]. But deep neural networks are often over-parameterized and easily suffering from over-fitting. Regularization [9, 10] and data augmentation [1, 11] are widely used techniques to alleviate the over-fitting problem. Many data-level regularization methods [10, 12, 13] have achieved promising performance in image classification. These methods are similar to data augmentation where they put constraints on the input images. Although effective in image classification, these methods are hard to apply to downstream tasks such as object detection and segmentation due to their special operations. For example, the state-of-the-art CutMix [13] can not be directly applied to object detection because first, mixing samples will destroy the semantics in images; second, it is hard to interpolate the labels in these tasks. 
Another category of regularization methods imposes constraints on the network structures. [14] proposes that adding noises to the network gradients can improve generalization. Other methods [9,15,16] randomly drop some connections in the network, which implicitly introduce random noises in the training process. These methods are usually more generic but not as effective as data-level regularization. In this paper, we introduce Gradient Augmentation (GradAug), which generates meaningful disturbances to the gradients by the network itself rather than just adding random noises. The idea is that when a random transformation (e.g., random rotation, random scale, random crop, etc.) is applied to an image, a well-generalized network should still recognize the transformed image as the same object. Different from the regular data augmentation technique which only regularizes the full-network, we regularize the representations learned by a set of sub-networks, which are randomly sampled 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. from the full network in terms of the network width (i.e., number of channels in each layer). Since the representation of the full network is composed of sub-networks’ representations due to weights sharing during the training, we expect sub-networks to learn different representations from different transformations, which will lead to a well-generalized and diversified full network representation. We conduct a comprehensive set of experiments to evaluate the proposed regularization method. Using a simple random scale transformation, GradAug can improve the ImageNet Top-1 accuracy of ResNet-50 from 76.32% to 78.79%, which is a new state-of-the-art accuracy. By leveraging a more powerful data augmentation technique – CutMix [13], we can further push the accuracy to 79.67%. The representation’s generalization ability is evaluated on COCO object detection and instance segmentation tasks (Section 4.4). Our ImageNet pretrained model alone can improve the baseline MaskRCNN-R50 by +1.2 box AP and +1.2 mask AP. When applying GradAug to the detection framework, it can outperform the baseline by +1.7 box AP and +2.1 mask AP. Moreover, we demonstrate that GradAug is robust to image corruptions and adversarial attacks (Section 4.5) and is highly effective in low data settings (Section 4.6). 2 Related Work Data augmentation. Data augmentation [1, 11, 17] increases the amount and diversity of training data by linear or non-linear transformations over the original data. In computer vision, it usually includes rotation, flipping, etc. Recently, a series of regularization methods use specially-designed operations on the input images to alleviate over-fitting in deep neural networks. These methods are similar to data augmentation. Cutout [10] randomly masks out a squared region on the image to force the network to look at other image context. Dropblock [18] shares a similar idea with Cutout but it drops a region in the feature maps. Although they have achieved improvements over the regular data augmentation, such region dropout operations may lose information about the original images. Mixup [12] mixes two samples by linearly interpolating both the images and labels. CutMix [13] combines Cutout and Mixup to replace a squared region with a patch from another image. Other mixed sample variants [19, 20] all share similar ideas. 
While effective in image classification, mixed-sample augmentation does not apply naturally to tasks such as detection and segmentation due to semantic and label ambiguities. In contrast, the proposed GradAug is a task-agnostic approach which leverages the most common image transformations to regularize sub-networks. This allows the method to be directly applied to different vision tasks and makes it easily amenable to other applications. Structure regularization. Another category of regularization methods imposes constraints on the network weights and structure to reduce over-fitting. [14] points out that adding random noise to the gradients during training can help the network generalize better. Dropout [9] randomly drops some connections during training to prevent units from co-adapting. The random dropping operation also implicitly introduces random noise into the training process. Many follow-up works share the idea of Dropout by randomly dropping network layers or branches. Shake-Shake [21] assigns random weights to residual branches to disturb the forward and backward passes, but it is limited to three-branch architectures. ShakeDrop [22] extends Shake-Shake to two-branch architectures (e.g., ResNet [2] and PyramidNet [23]); however, its application is still limited. [15] randomly drops a subset of layers during training, so the final network can be viewed as an ensemble of many shallow networks. Although these methods have shown improvements on image classification, they are usually not as effective as data-level regularization strategies. Moreover, their generalization and effectiveness are not validated on other tasks. GradAug leverages the advantages of both categories of methods. It uses different augmentations to regularize a set of sub-networks generated from the full network in a joint training process. This introduces self-guided disturbances to the gradients of the full network rather than adding random noise. The method is more effective and generic than previous techniques. 3 GradAug 3.1 Algorithm When some random transformations are applied to an image, humans can still recognize it as the same object. We expect deep neural networks to have the same generalization ability. GradAug aims to regularize sub-networks with differently transformed training samples. There are various methods to generate sub-networks during training. Previous works [9, 15, 16] usually stochastically drop some neurons, layers or paths. In GradAug, we expect the final full-network to take advantage of the learned representations of the sub-networks. Therefore, we sample sub-networks in a more structured manner, that is, by the network width. We define θ as the model parameter. Without loss of generality, we use convolutional layers for illustration, so θ ∈ R^{c1×c2×k×k}, where c1 and c2 are the numbers of input and output channels and k is the convolution kernel size. We define the width of a sub-network as w ∈ [α, 1.0], where α is the width lower bound. The weights of the sub-network are θ_w. Different from random sampling, we always sample the first w × 100% channels of the full-network, so the sub-network weights are θ_w ∈ R^{wc1×wc2×k×k}. In this way, a larger sub-network always shares the representations of a smaller sub-network in a weight-sharing training fashion, so it can leverage the representations learned in smaller sub-networks. Iteratively, the sub-networks can construct a full-network with diversified representations.
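To make the width-based sub-network sampling concrete, the following is a minimal PyTorch sketch of slicing a shared convolution to a width ratio w. The layer sizes and the sliced_conv2d helper are illustrative assumptions, not the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F

def sliced_conv2d(x, weight, bias, width):
    """Run a conv layer at a given width ratio by using only the first
    channels of the shared full-network weight (weights are shared, not copied)."""
    c_out = math.ceil(width * weight.shape[0])
    c_in = x.shape[1]                      # channels produced by the previous sliced layer
    w_sub = weight[:c_out, :c_in]          # theta_w: first w*100% filters and input channels
    b_sub = bias[:c_out] if bias is not None else None
    return F.conv2d(x, w_sub, b_sub, padding=weight.shape[-1] // 2)

# Example: one shared 3x3 conv with 64 -> 128 channels, evaluated at width 0.9.
full_weight = torch.randn(128, 64, 3, 3, requires_grad=True)
full_bias = torch.zeros(128, requires_grad=True)
x = torch.randn(2, 58, 32, 32)             # 58 = ceil(0.9 * 64) channels from the previous layer
y = sliced_conv2d(x, full_weight, full_bias, width=0.9)
print(y.shape)                             # torch.Size([2, 116, 32, 32]); 116 = ceil(0.9 * 128)
```

Because every width shares the same leading channels, gradients from all sub-networks accumulate into the same full-network parameter tensor.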
Figure 1 shows the class activation maps (CAM) [24] of the sub-network and full-network. The full-network pays attention to several regions of the object because it can leverage the representation of the sub-network. For example, when the sub-network (w = 0.9) focuses on one dog in the image, the full-network shares this attention and uses the other network part to capture the information of another dog. Therefore, the full-network learns richer semantic information in the image, while the baseline model only models a single region and does not fully comprehend the salient information of the image. To make the method simple and generic, we choose among the most commonly used transformations such as random scales, random rotations, random crops, etc. In the experiments, we show that a simple random scale transformation can already achieve state-of-the-art performance on image classification, and it can be directly applied to other applications. Moreover, we can use more powerful augmentations such as CutMix for further enhanced performance. Training procedure. The training procedure of GradAug is very similar to the regular network training. In each training iteration, we train the full-network with the original images, which is the same as the regular training process. Then we additionally sample n sub-networks and train them with randomly transformed images. Finally, we accumulate the losses of full-network and sub-networks to update the model weights. This naive training approach achieves good training accuracy but the testing accuracy is very low. This is caused by the batch normalization (BN) [25] layers. The BN layer will collect a moving average of training batches’ means and variances during training. The collected mean and variance will be used during inference. However, the batch mean and variance in the sub-networks can be very different from those in the full-network because the training samples are randomly transformed. This will cause the final BN mean and variance to be inappropriate for the full-network during inference. But in the training phase, BN uses the mean and variance of the current batch, so the training behaves normally. To obtain the correct BN statistics for the full-network, we do not update BN mean and variance when training the sub-networks. Only the full-network is allowed to collect these statistics. However, the weights in BN layer are still updated by sub-networks because they can be shared with full-network. To further improve the performance, we also leverage two training tricks in [26]. First, we use the output of the full-network as soft labels to train the sub-networks. Second, we always sample the smallest sub-network (i.e., w = α) during training if n > 1. The effect of these two training tricks is provided in the supplementary material. The Pytorch-style pseudo-code of GradAug is presented in Algorithm 1. 3.2 Analysis of gradient property We provide an in-depth analysis of GradAug from the perspective of gradient flow. For simplicity, we consider a fully connected network with 1-D training samples. We define the network as N . The parameter of one layer in the full-network is θ ∈ Rc1×c2 . The parameter of sub-networks is θw as explained in Section 3.1. x ∈ Rd is the training sample and y is its label. The output of the network is denoted as N(θ, x), and the training loss is l(N(θ, x), y) where l is the loss function, which is often the cross entropy in image classification. 
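Before the gradient analysis, here is a minimal PyTorch sketch of one GradAug training iteration as described above (Algorithm 1, restated just below, gives the corresponding pseudo-code). The model, optimizer, loss choices, and the sample_subnet / set_bn_tracking helpers are illustrative assumptions, not the authors' released code; for clarity this sketch sums all losses before a single backward pass, whereas the paper back-propagates each sub-network loss one by one to save memory.

```python
import random
import torch.nn as nn
import torch.nn.functional as F

def set_bn_tracking(model, track):
    # Only the full-network pass may update BN running mean/var; BN affine
    # weights (gamma, beta) still receive gradients in every pass.
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.track_running_stats = track

def gradaug_step(model, sample_subnet, optimizer, img, target,
                 n=3, alpha=0.9, scales=(224, 192, 160, 128)):
    optimizer.zero_grad()
    # Full-network pass on the original images; only this pass collects BN statistics.
    set_bn_tracking(model, True)
    out_full = model(img)
    total_loss = F.cross_entropy(out_full, target)
    # Detach: treat the full-network prediction as a fixed soft label for sub-networks.
    soft_target = F.softmax(out_full.detach(), dim=1)

    set_bn_tracking(model, False)
    for i in range(n):
        # Always include the smallest width; otherwise draw w uniformly in [alpha, 1].
        w = alpha if i == 0 else random.uniform(alpha, 1.0)
        subnet = sample_subnet(model, w)          # width-sliced view sharing the weights
        x = F.interpolate(img, size=random.choice(scales), mode='bilinear',
                          align_corners=False)    # random scale transformation T^i
        out_sub = subnet(x)
        total_loss = total_loss + F.kl_div(F.log_softmax(out_sub, dim=1),
                                           soft_target, reduction='batchmean')
    set_bn_tracking(model, True)

    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```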
Algorithm 1 Gradient Augmentation (GradAug). Input: network Net, training image img, random transformation T, number of sub-networks n, sub-network width lower bound α. (1) Train the full-network: forward pass output_f = Net(img); compute the loss loss_f = criterion(output_f, target). (2) Regularize the sub-networks: for i in range(n), sample a sub-network subnet_i = Sample(Net, α); fix the BN layers' mean and variance via subnet_i.track_running_stats = False; forward pass with transformed images, output_i = subnet_i(T^i(img)); compute the loss with soft labels, loss_i = criterion(output_i, output_f). (3) Compute the total loss L = loss_f + Σ_{i=1}^n loss_i, then compute gradients and do the backward pass.

The loss and gradients in a standard training process are computed as L_std = l(N(θ, x), y), g_std = ∂L_std/∂θ, (1) where g_std ∈ R^{c1×c2}. Structure regularization methods [9, 15, 16] randomly drop some connections in the network, and their loss and gradients can be computed as L_sr = l(N(θ_rand, x), y), g_sr = ∂L_sr/∂θ_rand. (2) We can view g_sr as having the same shape as g_std, where the gradients of disabled connections are 0. Therefore, we can rewrite g_sr as g_sr = g_std + g_noise, (3) where g_noise ∈ R^{c1×c2} is a random matrix which introduces random disturbances to the gradients. In contrast, GradAug applies more meaningful disturbances to the gradients. Let T be the random transformation operation (e.g., random scale, random rotation, etc.) and T^i be the transformation applied to sub-network i (i = 1, ..., n). The loss and gradients are computed as L_GA = l(N(θ, x), y) + Σ_{i=1}^n l(N(θ_{w_i}, T^i(x)), N(θ, x)), g_GA = ∂l(N(θ, x), y)/∂θ + Σ_{i=1}^n ∂l(N(θ_{w_i}, T^i(x)), N(θ, x))/∂θ_{w_i} = g_std + g′. (4) g_GA has a similar form to g_sr. The first term is the same as the gradients in standard training, but the second term g′ is derived from the sub-networks with transformed training samples. Since the sub-networks are part of the full-network, we call this term "self-guided". It reinforces good descent directions, leading to improved performance and faster convergence. g′ can be viewed as an augmentation to the raw gradients g_std. It allows different parts of the network to learn diverse representations. The gradients of data-level regularization methods are similar to g_std, with the difference only in the training sample. The gradients are g_dr = ∂l(N(θ, f(x)), y)/∂θ, (5) where f is the augmentation method such as CutMix. GradAug can also leverage these augmentations by applying them to the original samples before the random transformations. The gradients become g_GA = ∂l(N(θ, f(x)), y)/∂θ + Σ_{i=1}^n ∂l(N(θ_{w_i}, T^i(f(x))), N(θ, f(x)))/∂θ_{w_i} = g_dr + g′. (6) g′ is still an augmentation to g_dr. Data augmentation can also be combined with other structure regularization methods. However, similar to the derivations in Eq. 2 and Eq. 3, such a combination strategy introduces random noise to g_dr, which is not as effective as GradAug, as shown in Table 3. 4 Experiments We first evaluate the effectiveness of GradAug on image classification. Next, we show the generalization ability of GradAug on object detection and instance segmentation. Finally, we demonstrate that GradAug can improve the model's robustness to image distortions and adversarial attacks. We also show that GradAug is effective in low-data settings and can be extended to semi-supervised learning. 4.1 ImageNet classification Implementation details. The ImageNet dataset [27] contains 1.2 million training images and 50,000 validation images in 1000 categories.
We follow the same data augmentations in [13] to have a fair comparison. On ResNet-50, we train the model for 120 epochs with a batch size of 512. The initial learning rate is 0.2 with cosine decay schedule. We sample n = 3 sub-networks in each training iteration and the width lower bound is α = 0.9. For simplicity, we only use random scale transformation for sub-networks. That is the input images are randomly resized to one of {224× 224, 192× 192, 160× 160, 128× 128}. Note that we report the final-epoch accuracy rather than the highest accuracy in the whole training process as is reported in CutMix [13]. We evaluate GradAug and several popular regularization methods on the widely used ResNet-50 [2]. The results are shown in Table 1. GradAug achieves a new state-of-the-art performance of 78.79% based on ResNet-50. Specifically, GradAug significantly outperforms the structure regularization methods by more than 1 point. As illustrated in Eq. 3 and Eq. 4, GradAug has a similar form with structure regularization. The difference is that GradAug introduces self-guided disturbances to augment the raw gradients. The large improvement over the structure regularization methods clearly validates the effectiveness of our proposed method. As shown in Eq. 6, GradAug can be seamlessly combined with data augmentation. We combine GradAug with CutMix (p=0.5) and denote this method as GradAug†. We compare GradAug† with bag of tricks [28] at the bottom of Table 1. It is evident that GradAug† outperforms bag of tricks both in model complexity and accuracy. Note that bag of tricks includes a host of advanced techniques such as model tweaks, training refinements, label smoothing, knowledge distillation, Mixup augmentation, etc., while GradAug is as easy as regular model training. Due to the sub-networks in GradAug training, one natural question arises: Would the training cost of GradAug increase significantly? As stated in [13], typical regularization methods [12, 13, 18] require more training epochs to converge, while GradAug converges with less epochs. Thus the total training time is comparable. The memory cost is also comparable because sub-networks do forward and back-propagation one by one, and only their gradients are accumulated to update the weights. Table 2 shows the comparison on ImageNet. The training cost is measured on an 8× 1080Ti GPU server with a batch size of 512. We can see that the training time of GradAug is comparable with state-of-the-art regularization methods such as CutMix. 4.2 Cifar classification Implementation details. We also evaluate GradAug on Cifar-100 dataset [29]. The dataset has 50,000 images for training and 10,000 images for testing in 100 categories. We choose WideResNet [30] and PyramidNet [23] structures as they achieve state-of-the-art performance on Cifar dataset. We follow the training setting in [23,30] in our experiments. For WideResNet, we train the model for 200 epochs with a batch size of 128. The initial learning rate is 0.1 with cosine decay schedule. Weight decay is 0.0005. PyramidNet is trained for 300 epochs with a batch size of 64. The initial learning rate is 0.25 and decays by a factor of 0.1 at 150 and 225 epochs. Weight decay is 0.0001. We use random scale transformation where input images are resized to one of {32× 32, 28× 28, 24× 24}. The number of sub-networks is n = 3 and the width lower bound is α = 0.8. The results are compared in Table 3. 
GradAug is comparable with the state-of-the-art CutMix, and it clearly outperforms the best structure regularization method ShakeDrop, which validate the effectiveness of the self-guided augmentation to the raw gradients. We further illustrate this by comparing GradAug† with CutMix + ShakeDrop. On WideResNet, ShakeDrop severely degrades the Top-1 accuracy of CutMix by 2.44%, while GradAug consistently improves CutMix by more than 1 point. The reason is that ShakeDrop introduces random noises to the training process, which is unstable and ineffective in some cases. However, GradAug is a self-guided augmentation to the gradients, which makes it compatible with various structures and data augmentations. 4.3 Ablation study We study the contribution of random width sampling and random transformation to the performance, respectively. We also show the impact of the number of sub-networks n and the width lower bound α. The experiments are conducted on Cifar-100 based on the WideResNet-28-10 backbone. Random width sampling and random transformation. We study the effect of one component by abandoning the other one. First, we do not randomly sample sub-networks. Then GradAug becomes multi-scale training in our experiments. In each iteration, we feed different scaled images to the network. Second, we do not conduct random scale transformation. In each iteration, we sample 3 sub-networks and feed them with the original images. The results are shown in Table 4. Random scale and random width sampling only achieve marginal improvements over the baseline, but GradAug remarkably enhances the baseline (+2.43%). This reaffirms the effectiveness of our method, which unifies data augmentation and structure regularization in the same framework for better performance. Number of sub-networks and width lower bound. There are two hyperparameters in GradAug, the number of sub-networks n and sub-network width lower bound α. We first explore the effect of n. Other settings are the same as Section 4.2. The results are shown in Figure 2. A larger n tends to achieve higher performance since it involves more self-guided gradient augmentations. The accuracy plateaus when n ≥ 3. Note that even one sub-network can significantly improve the baseline. Then we investigate the impact of width lower bound α by fixing other settings. As shown in Figure 2, α = 0.8 achieves the best accuracy, but all the values clearly outperform the baseline. GradAug is not sensitive to these hyperparameters. Empirically, we can set n ≥ 3 and α ∈ [0.7, 0.9]. Effect of different transformations. As shown in experiments above, GradAug is very effective when leveraging random scale transformation and CutMix. Here we further explore other transformations, including random rotation transformation and the combination of random scale and rotation transformations. We conduct the experiments on WideResNet-28-10 and ResNet-50 following the settings above. For random rotation, we randomly rotate the images by a degree of {0◦, 90◦, 180◦, 270◦}. For the combination, the input images are first randomly rotated and then randomly resized. The results are shown in Table 5. It is clear that both transformations (random scale and random rotation) and their combination achieve significant improvements over the baseline. This validates our idea of regularizing sub-networks by different transformed images. Generating sub-networks by stochastic depth. In the experiments above, we generate subnetworks by cutting the network width. 
Similarly, we can generate sub-network by shrinking the network depth. We follow StochDepth [15] to randomly drop some layers during training. The training settings are the same as [15] and we use random scale transformation to regularize subnetworks. As shown in Table 6, GradAug significantly outperforms the baseline and StochDepth. This demonstrates that GradAug can be generalized to depth-shortened sub-networks and again verifies the effectiveness of our idea. 4.4 Object detection and instance segmentation To evaluate the generalization ability of the learned representations by GradAug, we finetune its ImageNet pretrained model for COCO [31] object detection and instance segmentation. The experiments are based on Mask-RCNN-FPN [6, 32] framework and MMDetection toolbox [33] on ResNet-50 backbone. Mixup and CutMix, two most effective methods in image classification, are employed for comparison. As explained in Section 2, Mixup and CutMix are mixed sample data augmentation methods, which can not be applied to object detection and segmentation. Therefore, we compare these methods by directly finetuning their ImageNet pretrained models on COCO dataset. All models are trained with 1× schedule on COCO dataset. The image resolution is 1000 × 600. The mean Average Precision (AP at IoU=0.50:0.05:0.95) is reported in Table 7. We can see that although Mixup and CutMix achieve large improvements on ImageNet classification, the learned representations can barely benefit object detection and segmentation. In contrast, GradAug-pretrained model considerably improves the performance of Mask-RCNN. This validates that GradAug enables the model to learn well-generalized representations which transfer well to other tasks. Moreover, the training procedure of GradAug can be directly applied to the detection framework. The result (last line of Table 7) shows that it further boosts the performance as compared with GradAug-pretrained and can significantly improve the baseline by +1.7 det mAP and +2.1 seg mAP. The implementation details and qualitative results are in the supplementary material. 4.5 Model robustness Deep neural networks are easily fooled by unrecognizable changes on input images. Developing robust machine learning models is pivotal for safety-critical applications. In this section, we evaluate the model robustness to two kinds of permutations, image corruptions and adversarial attacks. Image corruption. ImageNet-C dataset [34] is created by introducing a set of 75 common visual corruptions to ImageNet classification. ImageNet-C has 15 types of corruptions drawn from four categories (noise, blur, weather and digital). Each type of corruption has 5 levels of severity. Corruptions are applied to validation set only. Models trained on clean ImageNet should be tested on the corrupted validation set without retraining. We follow the evaluation metrics in [34] to test ResNet-50 trained by different regularization methods. The mean corruption error (mCE) is reported in Table 8. Mixup has lower mCE than other methods. We conjecture the reason is that Mixup proportionally combines two samples, which is in a similar manner to the generation of corrupted images. GradAug outperforms the second best competing method CutMix by 1.4%. Note that GradAug can also be combined with Mixup and we denote it as GradAug*. The results in Table 8 reveal that GradAug* further improves Mixup and achieves the lowest mCE. This demonstrates that GradAug is capable of leveraging the advantages of different augmentations. 
Adversarial attack. We also evaluate model robustness to adversarial samples. Different from image corruption, adversarial attack uses a small distortion which is carefully crafted to confuse a classifier. We use Fast Gradient Sign Method (FGSM) [35] to generate adversarial distortions and conduct white-box attack to ResNet-50 trained by different methods. The classification accuracy on adversarially attacked ImageNet validation set is reported in Table 9. Note that here Mixup is not as robust as to image corruptions, which validates our aforementioned conjecture in the image corruption experiment. GradAug and CutMix are comparable and both significantly outperform other methods. GradAug† further gains improvements over GradAug and CutMix, manifesting superiority of our self-guided gradient augmentation. 4.6 Low data setting Deep neural network models suffer from more severe over-fitting when there is only limited amount of training data. Thus we expect regularization methods to show its superiority in low data setting. However, we find that state-of-the-art methods are not as effective as supposed. For a fair comparison, we follow the same hyperparameter settings in [37]. The backbone network is WideResNet-28-2. We first evaluate different methods on Cifar-10 with 250, 1000 and 4000 labels. Training images are sampled uniformly from 10 categories. We run each model on 5 random data splits and report the mean and standard deviation in Table 10. We observe that CutMix (p=0.5) and ShakeDrop even degrade the baseline model performance, especially when labels are very limited. CutMix mixes images and their labels, which introduces strong noises to the data and ground truth labels. This is effective when there is enough clean labels to learn a good baseline. But when the baseline is weak, this disturbance is too severe. We reduce the impact of CutMix by setting p=0.1, where CutMix is barely used during training. CutMix still harms the baseline when there are only 250 labels, but it becomes beneficial when there are 4000 labels. ShakeDrop has a similar trend with CutMix since it introduces noises to the structure. In contrast, GradAug significantly and consistently enhances the baseline in all cases because it generates self-guided augmentations to the baseline rather than noises. Moreover, GradAug can be easily extended to semi-supervised learning (SSL). We can leverage the full-network to generate labels for unlabeled data and use them to train the sub-networks. See the supplementary material for implementation details. Our GradAug-semi can further improve the performance over GradAug. It even achieves comparable performance with Mean Teacher [36], which is a popular SSL algorithm. We also evaluate the methods on STL-10 dataset [38]. The dataset is designed to test SSL algorithms, where the unlabeled data are sampled from a different distribution than labeled data. Similarly, CutMix and ShakeDrop are not effective while GradAug and GradAug-semi achieve clear improvements. 5 Conclusion In this paper, we propose GradAug which introduces self-guided augmentations to the network gradients during training. The method is easy to implement while being effective. It achieves a new state-of-the-art accuracy on ImageNet classification. The generalization ability is verified on COCO object detection and instance segmentation. GradAug is also robust to image corruption and adversarial attack. 
We further reveal that current state-of-the-art methods do not perform well in low data setting, while GradAug consistently enhances the baseline in all cases. Acknowledgments This work is partially supported by the National Science Foundation (NSF) under Grant No. 1910844 and NSF/Intel Partnership on MLWiNS under Grant No. 2003198. Broader Impact The proposed regularization method is a generic approach for deep neural networks training. Researchers in the machine learning and computer vision communities should benefit from this work. To the best of our knowledge, we don’t think this research will put anyone at disadvantage. All the experiments are based on the public datasets and follow the standard experimental settings. Thus the method does not leverage biases in the data.
1. What is the focus and contribution of the paper on regularization methods? 2. What are the strengths of the proposed approach, particularly in terms of its impact on image classification and object detection tasks? 3. What are the weaknesses of the paper, especially regarding its similarity to other works such as slimmable networks? 4. How does the reviewer assess the significance of the adopted training tricks for the performance of GradAug? 5. What are the questions raised by the reviewer regarding the applicability of GradAug on Slimmable networks and vice versa?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a new regularization method, GradAug. During training, GradAug trains sub-networks with various input transformations and aggregates the losses produced by the full network and the sub-networks. Strengths 1. The method significantly boosts performance on image classification and object detection. Weaknesses 1. Similarity with slimmable networks [*]. Defining sub-networks by a width ratio is quite similar to slimmable networks, and many training tricks were also adopted from the slimmable networks paper. Surely, applying data augmentation to each sub-network is different, but this training scheme is not a new thing. How critical are the two adopted training tricks (soft labels and always sampling the smallest sub-network) to the performance of GradAug? How much does the ImageNet accuracy degrade without these tricks? How about applying GradAug on top of slimmable networks; would it work better or not? Conversely, can a GradAug-trained network be pruned as in slimmable networks? [*] Universally Slimmable Networks and Improved Training Techniques, ICCV 2019.
NIPS
Title Sublinear Time Orthogonal Tensor Decomposition Abstract A recent work (Wang et al., NIPS 2015) gives the fastest known algorithms for orthogonal tensor decomposition with provable guarantees. Their algorithm is based on computing sketches of the input tensor, which requires reading the entire input. We show that in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor. Instead of using sketches to estimate inner products in tensor decomposition algorithms, we use importance sampling. To achieve sublinear time, we need to know the norms of tensor slices, and we show how to do this in a number of important cases. For symmetric tensors T = Σ_{i=1}^k λ_i u_i^{⊗p} with λ_i > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show that if the tensor slice norms are just slightly below ‖T‖_F then sublinear time is again possible. One of the main strengths of our work is empirical: in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy. 1 Introduction Tensors are a powerful tool for dealing with multi-modal and multi-relational data. In recommendation systems, often using more than two attributes can lead to better recommendations. This could occur, for example, in Groupon, where one could look at users, activities, and time (season, time of day, weekday/weekend, etc.) as three attributes to base predictions on (see [13] for a discussion). Similar to low-rank matrix approximation, we seek a tensor decomposition to succinctly store the tensor and to apply it quickly. A popular decomposition method is the canonical polyadic decomposition, i.e., the CANDECOMP/PARAFAC (CP) decomposition, where the tensor is decomposed into a sum of rank-1 components [9]. We refer the reader to [23], where applications of CP including data mining, computational neuroscience, and statistical learning for latent variable models are mentioned. A natural question, given the emergence of large data sets, is whether such decompositions can be performed quickly. There are a number of works on this topic [17, 16, 7, 11, 10, 4, 20]. Most related to ours are several recent works of Wang et al. [23] and Tung et al. [18], in which it is shown how to significantly speed up orthogonal tensor decomposition using the randomized technique of linear sketching [15]. In this work we also focus on orthogonal tensor decomposition. The idea in [23] is to create a succinct sketch of the input tensor, from which one can then perform implicit tensor decomposition by approximating inner products in existing decomposition methods.
Existing methods, like the power method, involve computing the inner product of a vector, which is now a rank-1 matrix, with another vector, which is now a slice of a tensor. Such inner products can be approximated much faster by instead computing the inner product of the sketched vectors, which have significantly lower dimension. One can also replace the sketching with sampling to approximate inner products; we discuss some sampling schemes [17, 4] below and compare them to our work. 1.1 Our Contributions We show that in a number of important cases, one can achieve the same theoretical guarantees as in the work of Wang et al. [23] (which was applied later by Tung et al. [18]) in sublinear time, that is, without reading most of the input tensor. While previous work needs to walk through the input at least once to create a sketch, we show one can instead perform importance sampling of the tensor based on the current iterate, together with reading a few entries of the tensor which help us learn the norms of tensor slices. We use a version of ℓ2-sampling for our importance sampling. One source of speedup in our work and in Wang et al. [23] comes from approximating inner products in iterations of the robust tensor power method (see below). To estimate 〈u, v〉 for n-dimensional vectors u and v, their work computes sketches S(u) and S(v) and approximates 〈u, v〉 ≈ 〈S(u), S(v)〉. Instead, if one has u, one can sample coordinates i proportional to u_i^2, which is known as ℓ2-sampling [14, 8]. One estimates 〈u, v〉 as v_i ‖u‖_2^2 / u_i, which is unbiased and has variance O(‖u‖_2^2 ‖v‖_2^2). These guarantees are similar to those using sketching, though the constants are significantly smaller (see below), and unlike sketching, one does not need to read the entire tensor to perform such sampling. Symmetric Tensors: As in [23], we focus on orthogonal tensor decomposition of symmetric tensors, though we explain the extension to the asymmetric case below. Symmetric tensors arise in engineering applications, for example, to represent the symmetric tensor field of stress, strain, and anisotropic conductivity. Another example is diffusion MRI, in which one uses symmetric tensors to describe diffusion in the brain or other parts of the body. In spectral methods, symmetric tensors are exactly those that come up in Latent Dirichlet Allocation problems. Although one can symmetrize a tensor using simple matrix operations (see, e.g., [1]), we cannot do this in sublinear time. In orthogonal tensor decomposition of a symmetric tensor, there is an underlying n × n × · · · × n tensor T* = Σ_{i=1}^k λ_i v_i^{⊗p}, and the input tensor is T = T* + E, where ‖E‖_2 ≤ ε. We have λ_1 > λ_2 > · · · > λ_k > 0, and {v_i}_{i=1}^k is a set of orthonormal vectors. The goal is to reconstruct approximations v̂_i to the vectors v_i, and approximations λ̂_i to the λ_i. Our results naturally generalize to tensors with different lengths in different dimensions. For simplicity, we first focus on order p = 3. In the robust tensor power method [1], one generates a random initial vector u and performs T update steps û = T(I, u, u)/‖T(I, u, u)‖_2, where T(I, u, u) = [ Σ_{j=1}^n Σ_{ℓ=1}^n T_{1,j,ℓ} u_j u_ℓ, Σ_{j=1}^n Σ_{ℓ=1}^n T_{2,j,ℓ} u_j u_ℓ, · · · , Σ_{j=1}^n Σ_{ℓ=1}^n T_{n,j,ℓ} u_j u_ℓ ]. The matrices T_{1,∗,∗}, . . . , T_{n,∗,∗} are referred to as the slices.
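As a concrete illustration of the ℓ2-sampling estimator for 〈u, v〉 described above, here is a small NumPy sketch; the function name and the sample count m are illustrative. Averaging m independent samples reduces the variance by a factor of m.

```python
import numpy as np

def l2_sample_inner(u, v, m, rng=np.random.default_rng(0)):
    """Estimate <u, v> by sampling coordinates i with probability u_i^2 / ||u||_2^2
    and averaging the unbiased single-sample estimates v_i * ||u||_2^2 / u_i."""
    norm_sq = np.dot(u, u)
    p = u ** 2 / norm_sq                     # l2-sampling distribution over coordinates
    idx = rng.choice(len(u), size=m, p=p)
    return np.mean(v[idx] * norm_sq / u[idx])

rng = np.random.default_rng(1)
u = rng.standard_normal(10000)
v = u + 0.5 * rng.standard_normal(10000)     # correlated, so <u, v> is large
print(np.dot(u, v), l2_sample_inner(u, v, m=2000))   # exact vs. sampled estimate
```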
The vector û typically converges to the top eigenvector in a small number of iterations, and one often chooses a small number L of random initial vectors to boost confidence. Successive eigenvectors can be found by deflation. The algorithm and analysis immediately extend to higher order tensors. We use `2-sampling to estimate T(I, u, u). To achieve the same guarantees as in [23], for typical settings of parameters (constant k and several eigenvalue assumptions) naïvely one needs to take O(n2) `2-samples from u for each slice in each iteration, resulting in Ω(n3) time and destroying our sublinearity. We observe that if we additionally knew the squared norms ‖T1,∗,∗ ‖2F , . . . , ‖Tn,∗,∗ ‖2F , then we could take O(n2) `2-samples in total, where we take ‖Ti,∗,∗ ‖2F ‖T ‖2F ·O(n2) `2-samples from the i-th slice in expectation. Perhaps in some applications such norms are known or cheap to compute in a single pass, but without further assumptions, how can one obtain such norms in sublinear time? If T is a symmetric tensor, then Tj,j,j = ∑k i=1 λiv 3 i,j + Ej,j,j . Note that if there were no noise, then we could read off approximations to the slice norms, since ‖Tj,∗,∗ ‖2F = ∑k i=1 λ 2 i v 2 i,j , and so T 2/3 j,j,j is an approximation to ‖Tj,∗,∗ ‖2F up to factors depending on k and the eigenvalues. However, there is indeed noise. To obtain non-trivial guarantees, the robust tensor power method assumes ‖E ‖2 = O(1/n), where ‖E ‖2 = sup ‖u‖2=‖v‖2=‖w‖2=1 E(u, v, w) = sup ‖u‖2=‖v‖2=‖w‖2=1 n∑ i=1 n∑ j=1 n∑ k=1 Ei,j,k uivjwk, which in particular implies |Ej,j,j | = O(1/n). This assumption comes from the Θ(1/ √ n)correlation of the random initial vector to v1. This noise bound does not trivialize the problem; indeed, Ej,j,j can be chosen adversarially subject to |Ej,j,j | = O(1/n), and if the vi were random unit vectors and the λi and k were constant, then ∑k i=1 λiv 3 i,j = O(1/n 3/2), which is small enough to be completely masked by the noise Ej,j,j . Nevertheless, there is a lot of information about the slice norms. Indeed, suppose k = 1, λ1 = Θ(1), and ‖T ‖F = 1. Then Tj,j,j = Θ(v31,j) + Ej,j,j , and one can show ‖Tj,∗,∗ ‖2F = λ21v21,j ± O(1/n). Again using that |Ej,j,j | = O(1/n), this implies ‖Tj,∗,∗ ‖2F = ω(n−2/3) if and only if Tj,j,j = ω(1/n), and therefore one would notice this by reading Tj,j,j . There can only be o(n2/3) slices j for which ‖Tj,∗,∗ ‖2F = ω(n−2/3), since ‖T ‖2F = 1. Therefore, for each of them we can afford to take O(n2) `2-samples and still have an O(n2+2/3) = o(n3) sublinear running time. The remaining slices all have ‖Tj,∗,∗ ‖2F = O(n−2/3), and therefore if we also take O(n1/3) `2-samples from every slice, we will also estimate the contribution to T(I, u, u) from these slices well. This is also a sublinear O(n2+1/3) number of samples. While the previous paragraph illustrates the idea for k = 1, for k = 2 we need to read more than the Tj,j,j entries to decide how many `2-samples to take from a slice. The analysis is more complicated because of sign cancellations. Even for k = 2 we could have Tj,j,j = λ1v31,j + λ2v 3 2,j + Ej,j,j , and if v1,j = −v2,j then we may not detect that ‖Tj,∗,∗ ‖2F is large. We fix this by also reading the entries Ti,j,j ,Tj,i,j , and Tj,j,i for every i and j. This is still only O(n2) entries and so we are still sublinear time. Without additional assumptions, we only give a formal analysis of this for k ∈ {1, 2}. 
More importantly, if instead of third-order symmetric tensors we consider p-th order symmetric tensors for even p, we do not have such sign cancellations. In this case we do not have any restrictions on k for estimating slice norms. One does need to show after deflation, the slice norms can still be estimated; this holds because the eigenvectors and eigenvalues are estimated sufficiently well. We also give several per-iteration optimizations of our algorithm, based on careful implementations of generating a sorted list of random numbers and random permutations. We find empirically (see below) that we are much faster per iteration than previous sketching algorithms, in addition to not having to read the entire input tensor in a preprocessing step. Asymmetric Tensors: For asymmetric tensors, e.g., 3rd-order tensors of the form ∑k i=1 λiui⊗ vi⊗ wi, it is impossible to achieve sublinear time in general, since it is hard to distinguish T = ei⊗ej⊗ek for random i, j, k ∈ {1, 2, . . . , n} from T = 0⊗3. We make a necessary and sufficient assumption that all the entries of the ui are less than n−γ for an arbitrarily small constant γ > 0. In this case, all slice norms are o(n−γ) and by taking O(n2−γ) samples from each slice we achieve sublinear time. We can also apply such an assumption to symmetric tensors. Empirical Results: One of the main strengths of our work is our empirical results. In each iteration we approximate T(I, u, u) a total of B times independently and take the median to increase our confidence. In the notation of [23], B corresponds to the number of independent sketches used. While the median works empirically, there are some theoretical issues with it discussed in Remark 4. Also let b be the total number of `2-samples we take per iteration, which corresponds to the sketch size in the notation of [23]. We found that empirically we can set B and b to be much smaller than that in [23] and achieve the same error guarantees. One explanation for this is that the variance bound we obtain via importance sampling is a factor of 43 = 64 smaller than in [23], and for p-th order tensors, a factor of 4p smaller. To give an idea of how much smaller we can set b andB, to achieve roughly the same squared residual norm error on the synthetic data sets of dimension 1200 for finding a good rank-1 approximation, the algorithm of [23] would need to set parameters b = 216 and B = 50, whereas we can set b = 10× 1200 and B = 5. Our running time is 2.595 seconds and we have no preprocessing time, whereas the algorithm of [23] has a running time of 116.3 seconds and 55.34 seconds of preprocessing time. We refer the reader to Table 1 in Section 3. In total we are over 50 times faster. We also demonstrate our algorithm in a real-world application using real datasets, even when the datasets are sparse. Namely, we consider a spectral algorithm for Latent Dirichlet Allocation [1, 2] which uses tensor decomposition as its core computational step. We show a significant speedup can be achieved on tensors occurring in applications such as LDA, and we refer the reader to Table 2 in Section 3. For example, on the wiki [23] dataset with a tensor dimension of 200, we run more than 5 times faster than the sketching-based method. 
Previous Sampling Algorithms: Previous sampling-based schemes of [17, 4] do not achieve our guarantees, because [17] uses uniform sampling, which does not work for tensors with spiky elements, while the non-uniform sampling in [4] requires touching all of the entries in the tensor and making two passes over it. Notation Let [n] denote {1, 2, . . . , n}. Let ⊗ denote the outer product, and u⊗3 = u ⊗ u ⊗ u. Let T ∈ Rnp , where p is the order of tensor T and n is the dimension of tensor T. Let 〈A,B〉 denote the entry-wise inner product between two tensors A,B ∈ Rnp , e.g., 〈A,B〉 = ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 Ai1,i2,··· ,ip ·Bi1,i2,··· ,ip . For a tensor A ∈ Rn p , ‖A ‖F = ( ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 A2i1,··· ,ip) 1 2 . For random variable X let E[X] denote its expectation of X and V[X] its variance (if these quantities exist). 2 Main Results We explain the details of our main results in this section. First, we state the importance sampling lemmas for our tensor application. Second, we explain how to quickly produce a list of random tuples according to a certain distribution needed by our algorithm. Third, we combine the first and the second parts to get a fast way of approximating tensor contractions, which are used as subroutines in each iteration of the robust tensor power method. We then provide our main theoretical results, and how to estimate the slice norms needed by our main algorithm. Importance sampling lemmas. Approximating an inner product is a simple application of importance sampling. Tensor contraction T(u, v, w) can be regarded as the inner product between two n3-dimensional vectors, and thus importance sampling can be applied. Lemma 1 suggests that we can take a few samples according to their importance, e.g., we can sample Ti,j,k uivjwk with probability |uivjwk|2/‖u‖22‖v‖22‖w‖22. As long as the number of samples is large enough, it will approximate the true tensor contraction ∑ i ∑ j ∑ kTi,j,k uivjwk with small variance after a final rescaling. Lemma 1. Suppose random variable X = Ti,j,k uivjwk/(piqjrk) with probability piqjrk where pi = |ui|2/‖u‖22, qj = |vj |2/‖v‖22, and rk = |wk|2/‖w‖22, and we take L i.i.d. samples of X , denoted X1, X2, · · · , XL. Let Y = 1L ∑L `=1X`. Then (1) E[Y ] = 〈T, u ⊗ v ⊗ w〉, and (2) V[Y ] ≤ 1L‖T ‖ 2 F · ‖u⊗ v ⊗ w‖2F . Similarly, we also have importance sampling for each slice Ti,∗,∗, i.e., “face” of T. Lemma 2. For all i ∈ [n], suppose random variable Xi = Ti,j,k vjwk/(qjrk) with probability qjrk, where qj = |vj |2/‖v‖22 and rk = |wk|2/‖w‖22, and we take Li i.i.d. samples of Xi, say Xi1, X i 2, · · · , XiLi . Let Y i = 1Li ∑L `=1X i ` . Then (1) E[Y i] = 〈Ti,∗,∗, v ⊗ w〉 and (2) V[Y i] ≤ 1 Li ‖Ti,∗,∗ ‖2F ‖v ⊗ w‖2F . Generating importance samples in linear time. We need an efficient way to sample indices of a vector based on their importance. We view this problem as follows: imagine [0, 1] is divided into z “bins” with different lengths corresponding to the probability of selecting each bin, where z is the number of indices in a probability vector. We generate m random numbers uniformly from [0, 1] and see which bin each random number belongs to. If a random number is in bin i, we sample the i-th index of a vector. There are known algorithms [6, 19] to solve this problem in O(z +m) time. We give an alternative algorithm GENRANDTUPLES. 
Our algorithm combines Bentley and Saxe’s algorithm [3] for efficiently generating m sorted random numbers in O(m) time, and Knuth’s shuffling algorithm [12] for generating a random permutation of [m] in O(m) time. We use the notation CUMPROB(v, w) and CUMPROB(u, v, w) for the algorithm creating the distributions on Rn2 and Rn3 of Lemma 2 and Lemma 1, respectively. We note that naïvely applying previous algorithms would require z = O(n2) and z = O(n3) time to form these two distributions, but we can take O(m) samples from them implicitly in O(n+m) time. Fast approximate tensor contractions. We propose a fast way to approximately compute tensor contractions T(I, v, w) and T(u, v, w) with a sublinear number of samples of T, as shown in Alogrithm 1 and Algorithm 2. Naïvely computing tensor contractions using all of the entries of T gives an exact answer but could take n3 time. Also, to keep our algorithm sublinear time, we never explicitly compute the deflated tensor; rather we represent it implicitly and sample from it. Algorithm 1 Subroutine for approximate tensor contraction T(I, v, w) 1: function APPROXTIVW(T, v, w, n,B, {b̂i}) 2: q̃, r̃ ← CUMPROB(v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES( ∑n i=1 b̂i, q̃, r̃) 5: for i = 1→ n do 6: s(d)i ← 0 7: for ` = 1→ b̂i do 8: (j, k)← L(i−1)b+` 9: s(d)i ← s (d) i + 1 qjrk Ti,j,k ·uj · uk 10: T̂(I, v, w)i ← median d∈[B] s (d) i /b̂i, ∀i ∈ [n] 11: return T̂(I, v, w) Algorithm 2 Subroutine for approximate tensor contraction T(u, v, w) 1: function APPROXTUVW(T, u, v, w, n,B, b̂) 2: p̃, q̃, r̃ ← CUMPROB(u, v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES(̂b, p̃, q̃, r̃). 5: s(d) ← 0 6: for (i, j, k) ∈ L do 7: s(d) ← s(d) + 1piqjrk Ti,j,k ·ui · uj · uk 8: s(d) ← s(d)/b̂ 9: T̂(u, v, w)← median d∈[B] s(d) 10: return T̂(u, v, w) The following theorem gives the error bounds of APPROXTIVW and APPROXTUVW (in Algorithm 1 and 2). Let b̂i be the number samples we take from slice i ∈ [n] in APPROXTIVW, and let b̂ denote the total number of samples in our algorithm. Theorem 3. For T ∈ Rn×n×n and u ∈ Rn with ‖u‖2 = 1, define the number ε1,T(u) = T̂(u, u, u) − T(u, u, u) and the vector ε2,T(u) = T̂(I, u, u) − T(I, u, u). For any b > 0, if b̂i & b‖Ti,∗,∗ ‖2F /‖T ‖2F then the following bounds hold 1: E[|ε1,T(u)|2] = O(‖T ‖2F /b), and E[‖ε2,T(u)‖22] = O(n‖T ‖2F /b). In addition, for any fixed ω ∈ Rn with ‖ω‖2 = 1, E[〈ω, ε2,T (u)〉2] = O(‖T ‖2F /b). (1) Eq. (1) can be obtained by observing that each random variable [ε2,T(u)]i is independent and so V[〈ω, ε2,T(u)〉] = ∑n i=1 ω 2 i ‖Ti,∗,∗ ‖2F b̂i . ( ∑n i=1 ω 2 i ) ‖T ‖2F b = ‖T ‖2F b . Remark 4. In [23], the coordinate-wise median of B estimates to the T(I, v, w) is used to boost the success probability. There appears to be a gap [21] in their argument as it is unclear how to achieve (1) after taking a coordinate-wise median, which is (7) in Theorem 1 of [23]. To fix this, we instead pay a factor proportional to the number of iterations in Algorithm 3 in the sample complexity b̂. Since we have expectation bounds on the quantities in Theorem 3, we can apply a Markov bound and a union bound across all iterations. This suffices for our main theorem concerning sublinear time below. One can obtain high probability bounds by running Algorithm 3 multiple times independently, and taking coordinate-wise medians of the output eigenvectors. Empirically, our algorithm works even if we take the median in each iteration, which is done in line 10 in Algorithm 1. 
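To make Lemma 1 and the APPROXTUVW routine concrete, the following NumPy sketch estimates T(u, v, w) by importance sampling and takes the median over B repetitions. The dense in-memory tensor and the function name are illustrative only; the point of the actual algorithm is that only the sampled entries of T are ever read.

```python
import numpy as np

def approx_tuvw(T, u, v, w, b, B, rng=np.random.default_rng(0)):
    """Median over B repetitions of the mean of b importance-sampled terms
    T[i,j,k] * u_i v_j w_k / (p_i q_j r_k), with (i, j, k) ~ p x q x r (Lemma 1)."""
    p = u ** 2 / np.dot(u, u)
    q = v ** 2 / np.dot(v, v)
    r = w ** 2 / np.dot(w, w)
    n = len(u)
    estimates = []
    for _ in range(B):
        i = rng.choice(n, size=b, p=p)
        j = rng.choice(n, size=b, p=q)
        k = rng.choice(n, size=b, p=r)
        terms = T[i, j, k] * u[i] * v[j] * w[k] / (p[i] * q[j] * r[k])
        estimates.append(terms.mean())
    return np.median(estimates)

# Sanity check on a rank-1 test tensor T = a x a x a with ||T||_F = 1.
rng = np.random.default_rng(1)
a = rng.standard_normal(100); a /= np.linalg.norm(a)
T = np.einsum('i,j,k->ijk', a, a, a)
u = a + 0.1 * rng.standard_normal(100)
exact = np.einsum('ijk,i,j,k->', T, u, u, u)          # equals <a, u>^3
print(exact, approx_tuvw(T, u, u, u, b=2000, B=5))
```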
Replacing Theorem 1 in [23] by our Theorem 3, the rest of the analysis in [23] is unchanged. Our Algorithm 3 is the same as the sketching-based robust tensor power method in [23], except for lines 10, 12, 15, and 17, where the sketching-based approximate tensor contraction is replaced by our importance sampling procedures APPROXTUVW and APPROXTIVW. Rather than use Theorem 2 of Wang et al. [23], the main theorem concerning the correctness of the robust tensor decomposition algorithm, we use a recent improvement of it by Wang and Anandkumar in Theorems 4.1 and 4.2 of [22], which states general guarantees for any algorithm satisfying per iteration noise guarantees. These theorems also remove many of the earlier eigenvalue assumptions in Theorem 2 of [23]. Theorem 5. (Theorem 4.1 and 4.2 of [22]), Suppose T = T∗+E, where T = ∑k i=1 λiv ⊗3 i with λi > 0 and orthonormal basis vectors {v1, . . . , vk} ⊆ Rn, n ≥ k. Let λmax, λmin be the largest and smallest values in {λi}ki=1 and {λ̂i, v̂i}ki=1 be outputs of the robust tensor power method. There exist absolute constants K0, C0, C1, C2, C3 > 0 such that if E satisfies ‖E(I, u(τ)t , u (τ) t )‖2 ≤ , |E(vi, u (τ) t , u (τ) t )| ≤ min{ / √ k,C0λmin/n}, (2) 1For two functions f, g, we use the shorthand f . g (resp. &) to indicate that f ≤ Cg (resp. ≥) for some absolute constant C. Algorithm 3 Our main algorithm 1: function IMPORTANCESAMPLINGRB(T, n,B, b) 2: if si are known, where ‖Ti,∗,∗ ‖2F . si then 3: b̂i ← b · si/‖T ‖2F ,∀i ∈ [n] 4: else 5: b̂i ← b/n, ∀i ∈ [n] 6: b̂ = ∑n i=1 b̂i 7: for ` = 1→ L do 8: u(`) ←INITIALIZATION 9: for t = 1→ T do 10: u(`) ← APPROXTIVW(T, u(`), u(`), n,B, {b̂i}) 11: u(`) ← u(`)/‖u(`)‖2 12: λ(`) ← APPROXTUVW(T, u(`), u(`), u(`), n,B, b̂) 13: `∗ ← arg max`∈[L] λ(`), u∗ ← u(` ∗) 14: for t = 1→ T do 15: u∗ ← APPROXTIVW(T, u∗, u∗, n,B, {b̂i}) 16: u∗ ← u∗/‖u∗‖2 17: λ∗ ← APPROXTUVW(T, u∗, u∗, u∗, n,B, b̂) 18: return λ∗, u∗ for all i ∈ [k], t ∈ [T ], and τ ∈ [L] and furthermore ≤ C1 · λmin/ √ k, T = Ω(log(λmaxn/ )), L ≥ max{K0, k} log(max{K0, k}), then with probability at least 9/10, there exists a permutation π : [k]→ [k] such that |λi − λ̂π(i)| ≤ C2 , ‖vi − v̂π(i)‖2 ≤ C3 /λi, ∀i = 1, · · · , k. Combining the previous theorem with our importance sampling analysis, we obtain: Theorem 6 (Main). Assume the notation of Theorem 5. For each j ∈ [k], suppose we take b̂(j) =∑n i=1 b̂ (j) i samples during the power iterations for recovering λ̂j and v̂j , the number of samples for slice i is b̂(j)i & bkT‖[T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ]i,∗,∗‖2F /‖T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ‖2F where b & n‖T ‖2F / 2 + ‖T ‖2F /min{ / √ k, λmin/n}2. Then the output guarantees of Theorem 5 hold for Algorithm 3 with constant probability. Our total time is O(LTk2b̂) and the space is O(nk), where b̂ = maxj∈[k] b̂(j). In Theorem 3, if we require b̂i = b‖Ti,∗,∗ ‖2F /‖T ‖2F , we need to scan the entire tensor to compute ‖Ti,∗,∗ ‖2F , making our algorithm not sublinear. With the following mild assumption in Theorem 7, our algorithm is sublinear when sampling uniformly (̂bi = b/n) without computing ‖Ti,∗,∗ ‖2F : Theorem 7 (Bounded slice norm). There is a constant α > 0, a constant β ∈ (0, 1] and a sufficiently small constant γ > 0, such that, for any 3rd order tensor T = T∗+E ∈ Rn3 with rank(T∗) ≤ nγ , λk ≥ 1/nγ , if ‖Ti,∗,∗ ‖2F ≤ 1nβ ‖T ‖ 2 F for all i ∈ [n], and E satisfies (2), then Algorithm 3 runs in O(n3−α) time. The condition β ∈ (0, 1] is a practical one. When β = 1, all tensor slices have equal Frobenius norm. 
The case β = 0 only occurs when ‖Ti,∗,∗ ‖F = ‖T ‖F ; i.e., all except one slice is zero. This theorem can also be applied to asymmetric tensors, since the analysis in [23] can be extended to them. For certain cases, we can remove the bounded slice norm assumption. The idea is to take a sublinear number of samples from the tensor to obtain upper bounds on all slice norms. In the full version, we extend the algorithm and analysis of the robust tensor power method to p > 3 by replacing contractions T(u, v, w) and T(I, v, w) with T(u1, u2, · · · , up) and T(I, u2, · · · , up). As outlined in Section 1, when p is even, because we do not have sign cancellations we can show: Theorem 8 (Even order). There is a constant α > 0 and a sufficiently small constant γ > 0, such that, for any even order-p tensor T = T∗+E ∈ Rnp with rank(T∗) ≤ nγ , p ≤ nγ and λk ≥ 1/nγ . For any sufficiently large constant c0, there exists a sufficiently small constant c > 0, for any ∈ (0, cλk/(c0p2kn(p−2)/2)) if E satisfies ‖E ‖2 ≤ /(c0 √ n), Algorithm 3 runs in O(np−α) time. As outlined in Section 1, for p = 3 and small k we can take sign considerations into account: Theorem 9 (Low rank). There is a constant α > 0 and a sufficiently small constant γ > 0 such that for any symmetric tensor T = T∗+E ∈ Rn3 with E satisfying (2), rank(T∗) ≤ 2, and λk ≥ 1/nγ , then Algorithm 3 runs in O(n3−α) time. 3 Experiments 3.1 Experiment Setup and Datasets Our implementation shares the same code base 1 as the sketching-based robust tensor power method proposed in [23]. We ran our experiments on an i7-5820k CPU with 64 GB of memory in singlethreaded mode. We ran two versions of our algorithm: the version with pre-scanning scans the full tensor to accurately measure per-slice Frobenius norms and make samples for each slice in proportion to its Frobenius norm in APPROXTIVW; the version without pre-scanning assumes that the Frobenius norm of each slice is bounded by 1nα ‖T ‖ 2 F , α ∈ (0, 1] and uses b/n samples per slice, where b is the total number of samples our algorithm makes, analogous to the sketch length b in [23]. Synthetic datasets. We first generated an orthonormal basis {vi}ki=1 and then computed the synthetic tensor as T∗ = ∑k i=1 λiv ⊗3 i , with λ1 ≥ · · · ≥ λk. Then we normalized T ∗ such that ‖T∗ ‖F = 1, and added a symmetric Gaussian noise tensor E where Eijl ∼ N (0, σn1.5 ) for i ≤ j ≤ l. Then σ controls the noise-to-signal ratio and we kept it as 0.01 in all our synthetic tensors. For the eigenvalues λi, we generated three different decays: inverse decay λi = 1i , inverse square decay λi = 1 i2 , and linear decay λi = 1− i−1 k . We also set k = 100 when generating tensors, since higher rank eigenvalues were almost indistinguishable from the added noise. To show the scalability of our algorithm, we generated tensors with different dimensions: n = 200, 400, 600, 800, 1000, 1200. Real-life datasets. Latent Dirichlet Allocation [5] (LDA) is a powerful generative statistical model for topic modeling. A spectral method has been proposed to solve LDA models [1, 2] and the most critical step in spectral LDA is to decompose a symmetric K × K × K tensor with orthogonal eigenvectors, where K is the number of modeled topics. We followed the steps in [1, 18] and built a K ×K ×K tensor TLDA for each dataset, and then ran our algorithms directly on TLDA to see how it works on those tensors in real applications. In our experiments we keep K = 200. 
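Returning to the synthetic setup described above, the following NumPy sketch shows one way to construct such a test tensor (orthonormal basis, a chosen eigenvalue decay, unit Frobenius norm, and symmetric Gaussian noise). The helper name, the small dimensions, and the simple symmetrization of the noise are assumptions for illustration; the paper draws E_ijl for i ≤ j ≤ l and mirrors it.

```python
import numpy as np

def make_synthetic_tensor(n, k, decay='inv_square', sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Orthonormal basis {v_i} via QR of a Gaussian matrix.
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))
    i = np.arange(1, k + 1)
    lam = {'inv': 1.0 / i, 'inv_square': 1.0 / i ** 2, 'linear': 1.0 - (i - 1) / k}[decay]
    # Normalizing the eigenvalues makes ||T*||_F = 1, since the v_i^{x3} are orthonormal.
    lam = lam / np.linalg.norm(lam)
    T_star = np.einsum('r,ir,jr,kr->ijk', lam, V, V, V)   # sum_i lambda_i v_i^{x3}
    # Symmetric Gaussian noise with entrywise scale sigma / n^{1.5} (simple symmetrization).
    E = rng.standard_normal((n, n, n)) * (sigma / n ** 1.5)
    E = (E + E.transpose(0, 2, 1) + E.transpose(1, 0, 2) +
         E.transpose(1, 2, 0) + E.transpose(2, 0, 1) + E.transpose(2, 1, 0)) / 6.0
    return T_star + E, V, lam

T, V, lam = make_synthetic_tensor(n=100, k=10)
print(T.shape, np.round(np.linalg.norm(T), 3))            # (100, 100, 100), approximately 1.0
```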
We used the two same datasets as the previous work [23]: Wiki and Enron, as well as four additional real-life datasets. We refer the reader to our GitHub repository 2 for our code and full results. 3.2 Results We considered running time and the squared residual norm to evaluate the performance of our algorithms. Given a tensor T ∈ Rn3 , let ‖T− ∑k i=1 λiui ⊗ vi ⊗ wi‖2F denote the squared residual norm where {(λ1, u1, v1, w1), · · · , (λk, uk, vk, wk)} are the eigenvalue/eigenvectors obtained by the robust power method. To reduce the experiment time we looked only for the first eigenvalue and eigenvector, but our algorithm is capable of finding any number of eigenvalues/eigenvectors. We list the pre-scanning time as preprocessing time in tables. It only depends on the tensor dimension n and unlike the sketching based method, it does not depend on b. Pre-scanning time is very short, because it only requires one pass of sequential access to the tensor which is very efficient on hardware. Sublinear time verification. Our theoretical result suggests the total number of samples bno-prescan for our algorithm without pre-scanning is n1−α(α ∈ (0, 1]) times larger than bprescan for our algorithm with pre-scanning. But in experiments we observe that when bno-prescan = bprescan both algorithms achieve very similar accuracy, indicating that in practice α ≈ 1. Synthetic datasets. We ran our algorithm on a large number of synthetic tensors with different dimensions and different eigengaps. Table 1 shows results for a tensor with 1200 dimensions with 100 non-zero eigenvalues decaying as λi = 1i2 . To reach roughly the same residual norm, the running time of our algorithm is over 50 times faster than that of the sketching-based robust tensor power method, thanks to the fact that we usually need a relatively small B and b to get a good residual, and the hidden constant factor in the running time of sampling is much smaller than that of sketching. Our algorithm scales well on large tensors due to its sub-linear nature. In Figure 1(a), for the sketching-based method we kept b = 216, B = 30 for n ≤ 800 and B = 50 for n > 800 (larger n requires more sketches to observe a reasonable recovery). For our algorithm, we chose b and B such 1http://yining-wang.com/fftlda-code.zip 2https://github.com/huanzhang12/sampling_tensor_decomp/ that for each n, our residual norm is on-par or better than the sketching-based method. Our algorithm needs much less time than the sketching-based one over all dimensions. Another advantage of our algorithm is that there are zero or very minimal preprocessing steps. In Figure 1(b), we can see how the preprocessing time grows to prepare sketches when the dimension increases. For applications where only the first few eigenvectors are needed, the preprocessing time could be a large overhead. Real-life datasets Due to the small tensor dimension (200), our algorithm shows less speedup than the sketching-based method. But it is still 2 ∼ 6 times faster in each of the six real-life datasets, achieving the same squared residual norm. Table 2 reports results for one of the datasets in many different settings of (b, B). Like in synthetic datasets, we also empirically observe that the constant b in importance sampling is much smaller than the b used in sketching to get the same error guarantee.
1. What is the focus of the paper regarding tensor decomposition? 2. What is the novel approach introduced by the authors in the paper? 3. How does the proposed method differ from previous works that utilize sketching? 4. What are the experimental results that support the analysis presented in the paper? 5. How would you assess the overall quality and significance of the paper's contributions?
Review
The authors modify the robust power method to use approximate inner products computed via importance sampling. This yields approximation guarantees analogous to those of modifications that use sketching, but with more favorable computation when the tensor is in unstructured form. Experimental results support the analysis. The authors clearly indicate the nature of the contribution: a reasonable idea (using importance sampling for inner-product approximation instead of sketching) that has apparently been overlooked for tensor decomposition to date. The paper is thorough in exploring this idea both analytically and experimentally. The review scores reflect this reviewer's impression of "an extremely well executed, albeit incremental, contribution."
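For context on the estimator this review credits, here is a minimal numpy sketch of ℓ2 importance sampling for an inner product; the function name is ours and the snippet illustrates the general technique rather than reproducing the paper's code.

```python
import numpy as np

def l2_sampled_inner_product(u, v, m, rng):
    """Unbiased estimate of <u, v>: draw index i with probability u_i^2 / ||u||^2
    and average v_i * ||u||^2 / u_i over m samples; the variance of the average
    is at most ||u||^2 * ||v||^2 / m."""
    sq = float(np.dot(u, u))
    p = u**2 / sq
    idx = rng.choice(len(u), size=m, p=p)
    return float(np.mean(v[idx] * sq / u[idx]))

# rng = np.random.default_rng(0)
# u, v = rng.standard_normal(1000), rng.standard_normal(1000)
# print(l2_sampled_inner_product(u, v, m=200, rng=rng), np.dot(u, v))
```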
NIPS
1. What is the focus and contribution of the paper regarding orthogonal tensor decomposition? 2. What are the strengths and weaknesses of the proposed approach, particularly its key insight and the prior work it appeals to? 3. Do you have any questions or concerns about the proof, experiment results, or minor issues in the paper? 4. How does the reviewer assess the novelty, practicality, and impact of the paper's content?
Review
The authors provide an approach toward constructing a decomposition of an orthogonal tensor in time sublinear in the size of the tensor. Their key insight is that the 'power iteration for tensors' involves multiplying the tensor with another tensor that has a compact implicit representation; this allows us to estimate the product without reading the whole tensor. Given their algorithm for estimating this product, they appeal to the work of Wang et al. to show that such a noisy estimate still allows you to provably recover the tensor decomposition. The authors then present extensive experiments (synthetic and real; tensors constructed from LDA models) to show that this leads to practical speed-ups over the previous result of Wang et al.
1. The core observation that the tensor products can be estimated quickly is insightful, and leads to immediate improvements in the existing algorithms for orthogonal tensor decomposition. The proofs are pretty straightforward.
2. The result about drawing m samples from a distribution on n items in O(m+n) time, though nice, is a very old result. See Alastair J. Walker. An efficient method for generating discrete random variables with general distributions. ACM Trans. Math. Softw., 3(3):253-256, September 1977. For a modern restatement of the result, see Karl Bringmann and Konstantinos Panagiotou. Efficient sampling methods for discrete distributions. In Automata, Languages, and Programming, pages 133-144. Springer, 2012.
3. There are several typos / minor issues in the paper:
- Lemma 3: it should be 'with probability pqr', not 1/pqr.
- Lemma 4: missing probability.
- Lemma 8: the claim for hat(b) = nb requires more justification.
- Lemma 9: in the variance expression, a 1/b factor is missing.
- Theorem 10: doesn't the algorithm require O(nk) space, for storing the previously computed factors?
- Fig 1b: the scale on the y-axis is unclear.
4. I like the key idea, and the speedup is very impressive in the initial experiments reported in the main paper. However, these speedups become less pronounced in the other synthetic examples reported in the supplementary material (a factor of 2-3, which is still impressive). For the real datasets, it is bothersome that the residual error achieved by the sketching-based approach seems to be consistently a little better than that of the importance-sampling-based approach (this paper), without a significant loss of running time. It would be very instructive if the authors could explain this behavior.
NIPS
Title Sublinear Time Orthogonal Tensor Decomposition Abstract A recent work (Wang et. al., NIPS 2015) gives the fastest known algorithms for orthogonal tensor decomposition with provable guarantees. Their algorithm is based on computing sketches of the input tensor, which requires reading the entire input. We show in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor. Instead of using sketches to estimate inner products in tensor decomposition algorithms, we use importance sampling. To achieve sublinear time, we need to know the norms of tensor slices, and we show how to do this in a number of important cases. For symmetric tensors T = ∑k i=1 λiu ⊗p i with λi > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show if the tensor slice norms are just slightly below ‖T ‖F then sublinear time is again possible. One of the main strengths of our work is empirical in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy. N/A ∑k i=1 λiu ⊗p i with λi > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show if the tensor slice norms are just slightly below ‖T ‖F then sublinear time is again possible. One of the main strengths of our work is empirical - in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy. 1 Introduction Tensors are a powerful tool for dealing with multi-modal and multi-relational data. In recommendation systems, often using more than two attributes can lead to better recommendations. This could occur, for example, in Groupon where one could look at users, activities, and time (season, time of day, weekday/weekend, etc.), as three attributes to base predictions on (see [13] for a discussion). Similar to low rank matrix approximation, we seek a tensor decomposition to succinctly store the tensor and to apply it quickly. A popular decomposition method is the canonical polyadic decomposition, i.e., the CANDECOMP/PARAFAC (CP) decomposition, where the tensor is decomposed into a sum of rank-1 components [9]. We refer the reader to [23], where applications of CP including data mining, computational neuroscience, and statistical learning for latent variable models are mentioned. A natural question, given the emergence of large data sets, is whether such decompositions can be performed quickly. There are a number of works on this topic [17, 16, 7, 11, 10, 4, 20]. Most related to ours are several recent works of Wang et al. [23] and Tung et al. [18], in which it is shown how to significantly speed up this decomposition for orthogonal tensor decomposition using the randomized technique of linear sketching [15]. In this work we also focus on orthogonal tensor decomposition. The idea in [23] is to create a succinct sketch of the input tensor, from which one can then perform implicit tensor decomposition by approximating inner products in existing decomposition methods. 
Existing methods, like the power method, involve computing the inner product of a vector, which is now a rank-1 matrix, with another vector, which is now a slice of a tensor. Such inner products can ∗Full version appears on arXiv, 2017. ‡Work done while visiting IBM Almaden. †Supported by XDATA DARPA Air Force Research Laboratory contract FA8750-12-C-0323. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. be approximated much faster by instead computing the inner product of the sketched vectors, which have significantly lower dimension. One can also replace the sketching with sampling to approximate inner products; we discuss some sampling schemes [17, 4] below and compare them to our work. 1.1 Our Contributions We show in a number of important cases, one can achieve the same theoretical guarantees in the work of Wang et al. [23] (which was applied later by Tung et al. [18]), in sublinear time, that is, without reading most of the input tensor. While previous work needs to walk through the input at least once to create a sketch, we show one can instead perform importance sampling of the tensor based on the current iterate, together with reading a few entries of the tensor which help us learn the norms of tensor slices. We use a version of `2-sampling for our importance sampling. One source of speedup in our work and in Wang et al. [23] comes from approximating inner products in iterations in the robust tensor power method (see below). To estimate 〈u, v〉 for n-dimensional vectors u and v, their work computes sketches S(u) and S(v) and approximates 〈u, v〉 ≈ 〈S(u), S(v)〉. Instead, if one has u, one can sample coordinates i proportional to u2i , which is known as `2-sampling [14, 8]. One estimates 〈u, v〉 as vi‖u‖ 2 2 ui , which is unbiased and has variance O(‖u‖22‖v‖22). These guarantees are similar to those using sketching, though the constants are significantly smaller (see below), and unlike sketching, one does not need to read the entire tensor to perform such sampling. Symmetric Tensors: As in [23], we focus on orthogonal tensor decomposition of symmetric tensors, though we explain the extension to the asymmetric case below. Symmetric tensors arise in engineering applications, for example, to represent the symmetric tensor field of stress, strain, and anisotropic conductivity. Another example is diffusion MRI in which one uses symmetric tensors to describe diffusion in the brain or other parts of the body. In spectral methods symmetric tensors are exactly those that come up in Latent Dirichlet Allocation problems. Although one can symmetrize a tensor using simple matrix operations (see, e.g., [1]), we cannot do this in sublinear time. In orthogonal tensor decompostion of a symmetric matrix, there is an underlying n× n · · ·n tensor T∗ = ∑k i=1 λiv ⊗p i , and the input tensor is T = T ∗+E, where ‖E ‖2 ≤ . We have λ1 > λ2 > · · · > λk > 0 and that {vi}ki=1 is a set of orthonormal vectors. The goal is to reconstruct approximations v̂i to the vectors vi, and approximations λ̂i to the λi. Our results naturally generalize to tensors with different lengths in different dimensions. For simplicity, we first focus on order p = 3. In the robust tensor power method [1], one generates a random initial vector u, and performs T update steps û = T(I, u, u)/‖T(I, u, u)‖2, where T(I, u, u) = [ n∑ j=1 n∑ `=1 T1,j,` uju`, n∑ j=1 n∑ `=1 T2,j,` uju`, · · · , n∑ j=1 n∑ `=1 Tn,j,` uju` ] . The matrices T1,∗,∗, . . . ,Tn,∗,∗ are referred to as the slices. 
The vector û typically converges to the top eigenvector in a small number of iterations, and one often chooses a small number L of random initial vectors to boost confidence. Successive eigenvectors can be found by deflation. The algorithm and analysis immediately extend to higher order tensors. We use `2-sampling to estimate T(I, u, u). To achieve the same guarantees as in [23], for typical settings of parameters (constant k and several eigenvalue assumptions) naïvely one needs to take O(n2) `2-samples from u for each slice in each iteration, resulting in Ω(n3) time and destroying our sublinearity. We observe that if we additionally knew the squared norms ‖T1,∗,∗ ‖2F , . . . , ‖Tn,∗,∗ ‖2F , then we could take O(n2) `2-samples in total, where we take ‖Ti,∗,∗ ‖2F ‖T ‖2F ·O(n2) `2-samples from the i-th slice in expectation. Perhaps in some applications such norms are known or cheap to compute in a single pass, but without further assumptions, how can one obtain such norms in sublinear time? If T is a symmetric tensor, then Tj,j,j = ∑k i=1 λiv 3 i,j + Ej,j,j . Note that if there were no noise, then we could read off approximations to the slice norms, since ‖Tj,∗,∗ ‖2F = ∑k i=1 λ 2 i v 2 i,j , and so T 2/3 j,j,j is an approximation to ‖Tj,∗,∗ ‖2F up to factors depending on k and the eigenvalues. However, there is indeed noise. To obtain non-trivial guarantees, the robust tensor power method assumes ‖E ‖2 = O(1/n), where ‖E ‖2 = sup ‖u‖2=‖v‖2=‖w‖2=1 E(u, v, w) = sup ‖u‖2=‖v‖2=‖w‖2=1 n∑ i=1 n∑ j=1 n∑ k=1 Ei,j,k uivjwk, which in particular implies |Ej,j,j | = O(1/n). This assumption comes from the Θ(1/ √ n)correlation of the random initial vector to v1. This noise bound does not trivialize the problem; indeed, Ej,j,j can be chosen adversarially subject to |Ej,j,j | = O(1/n), and if the vi were random unit vectors and the λi and k were constant, then ∑k i=1 λiv 3 i,j = O(1/n 3/2), which is small enough to be completely masked by the noise Ej,j,j . Nevertheless, there is a lot of information about the slice norms. Indeed, suppose k = 1, λ1 = Θ(1), and ‖T ‖F = 1. Then Tj,j,j = Θ(v31,j) + Ej,j,j , and one can show ‖Tj,∗,∗ ‖2F = λ21v21,j ± O(1/n). Again using that |Ej,j,j | = O(1/n), this implies ‖Tj,∗,∗ ‖2F = ω(n−2/3) if and only if Tj,j,j = ω(1/n), and therefore one would notice this by reading Tj,j,j . There can only be o(n2/3) slices j for which ‖Tj,∗,∗ ‖2F = ω(n−2/3), since ‖T ‖2F = 1. Therefore, for each of them we can afford to take O(n2) `2-samples and still have an O(n2+2/3) = o(n3) sublinear running time. The remaining slices all have ‖Tj,∗,∗ ‖2F = O(n−2/3), and therefore if we also take O(n1/3) `2-samples from every slice, we will also estimate the contribution to T(I, u, u) from these slices well. This is also a sublinear O(n2+1/3) number of samples. While the previous paragraph illustrates the idea for k = 1, for k = 2 we need to read more than the Tj,j,j entries to decide how many `2-samples to take from a slice. The analysis is more complicated because of sign cancellations. Even for k = 2 we could have Tj,j,j = λ1v31,j + λ2v 3 2,j + Ej,j,j , and if v1,j = −v2,j then we may not detect that ‖Tj,∗,∗ ‖2F is large. We fix this by also reading the entries Ti,j,j ,Tj,i,j , and Tj,j,i for every i and j. This is still only O(n2) entries and so we are still sublinear time. Without additional assumptions, we only give a formal analysis of this for k ∈ {1, 2}. 
More importantly, if instead of third-order symmetric tensors we consider p-th order symmetric tensors for even p, we do not have such sign cancellations. In this case we do not have any restrictions on k for estimating slice norms. One does need to show after deflation, the slice norms can still be estimated; this holds because the eigenvectors and eigenvalues are estimated sufficiently well. We also give several per-iteration optimizations of our algorithm, based on careful implementations of generating a sorted list of random numbers and random permutations. We find empirically (see below) that we are much faster per iteration than previous sketching algorithms, in addition to not having to read the entire input tensor in a preprocessing step. Asymmetric Tensors: For asymmetric tensors, e.g., 3rd-order tensors of the form ∑k i=1 λiui⊗ vi⊗ wi, it is impossible to achieve sublinear time in general, since it is hard to distinguish T = ei⊗ej⊗ek for random i, j, k ∈ {1, 2, . . . , n} from T = 0⊗3. We make a necessary and sufficient assumption that all the entries of the ui are less than n−γ for an arbitrarily small constant γ > 0. In this case, all slice norms are o(n−γ) and by taking O(n2−γ) samples from each slice we achieve sublinear time. We can also apply such an assumption to symmetric tensors. Empirical Results: One of the main strengths of our work is our empirical results. In each iteration we approximate T(I, u, u) a total of B times independently and take the median to increase our confidence. In the notation of [23], B corresponds to the number of independent sketches used. While the median works empirically, there are some theoretical issues with it discussed in Remark 4. Also let b be the total number of `2-samples we take per iteration, which corresponds to the sketch size in the notation of [23]. We found that empirically we can set B and b to be much smaller than that in [23] and achieve the same error guarantees. One explanation for this is that the variance bound we obtain via importance sampling is a factor of 43 = 64 smaller than in [23], and for p-th order tensors, a factor of 4p smaller. To give an idea of how much smaller we can set b andB, to achieve roughly the same squared residual norm error on the synthetic data sets of dimension 1200 for finding a good rank-1 approximation, the algorithm of [23] would need to set parameters b = 216 and B = 50, whereas we can set b = 10× 1200 and B = 5. Our running time is 2.595 seconds and we have no preprocessing time, whereas the algorithm of [23] has a running time of 116.3 seconds and 55.34 seconds of preprocessing time. We refer the reader to Table 1 in Section 3. In total we are over 50 times faster. We also demonstrate our algorithm in a real-world application using real datasets, even when the datasets are sparse. Namely, we consider a spectral algorithm for Latent Dirichlet Allocation [1, 2] which uses tensor decomposition as its core computational step. We show a significant speedup can be achieved on tensors occurring in applications such as LDA, and we refer the reader to Table 2 in Section 3. For example, on the wiki [23] dataset with a tensor dimension of 200, we run more than 5 times faster than the sketching-based method. 
Previous Sampling Algorithms: Previous sampling-based schemes of [17, 4] do not achieve our guarantees, because [17] uses uniform sampling, which does not work for tensors with spiky elements, while the non-uniform sampling in [4] requires touching all of the entries in the tensor and making two passes over it. Notation Let [n] denote {1, 2, . . . , n}. Let ⊗ denote the outer product, and u⊗3 = u ⊗ u ⊗ u. Let T ∈ Rnp , where p is the order of tensor T and n is the dimension of tensor T. Let 〈A,B〉 denote the entry-wise inner product between two tensors A,B ∈ Rnp , e.g., 〈A,B〉 = ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 Ai1,i2,··· ,ip ·Bi1,i2,··· ,ip . For a tensor A ∈ Rn p , ‖A ‖F = ( ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 A2i1,··· ,ip) 1 2 . For random variable X let E[X] denote its expectation of X and V[X] its variance (if these quantities exist). 2 Main Results We explain the details of our main results in this section. First, we state the importance sampling lemmas for our tensor application. Second, we explain how to quickly produce a list of random tuples according to a certain distribution needed by our algorithm. Third, we combine the first and the second parts to get a fast way of approximating tensor contractions, which are used as subroutines in each iteration of the robust tensor power method. We then provide our main theoretical results, and how to estimate the slice norms needed by our main algorithm. Importance sampling lemmas. Approximating an inner product is a simple application of importance sampling. Tensor contraction T(u, v, w) can be regarded as the inner product between two n3-dimensional vectors, and thus importance sampling can be applied. Lemma 1 suggests that we can take a few samples according to their importance, e.g., we can sample Ti,j,k uivjwk with probability |uivjwk|2/‖u‖22‖v‖22‖w‖22. As long as the number of samples is large enough, it will approximate the true tensor contraction ∑ i ∑ j ∑ kTi,j,k uivjwk with small variance after a final rescaling. Lemma 1. Suppose random variable X = Ti,j,k uivjwk/(piqjrk) with probability piqjrk where pi = |ui|2/‖u‖22, qj = |vj |2/‖v‖22, and rk = |wk|2/‖w‖22, and we take L i.i.d. samples of X , denoted X1, X2, · · · , XL. Let Y = 1L ∑L `=1X`. Then (1) E[Y ] = 〈T, u ⊗ v ⊗ w〉, and (2) V[Y ] ≤ 1L‖T ‖ 2 F · ‖u⊗ v ⊗ w‖2F . Similarly, we also have importance sampling for each slice Ti,∗,∗, i.e., “face” of T. Lemma 2. For all i ∈ [n], suppose random variable Xi = Ti,j,k vjwk/(qjrk) with probability qjrk, where qj = |vj |2/‖v‖22 and rk = |wk|2/‖w‖22, and we take Li i.i.d. samples of Xi, say Xi1, X i 2, · · · , XiLi . Let Y i = 1Li ∑L `=1X i ` . Then (1) E[Y i] = 〈Ti,∗,∗, v ⊗ w〉 and (2) V[Y i] ≤ 1 Li ‖Ti,∗,∗ ‖2F ‖v ⊗ w‖2F . Generating importance samples in linear time. We need an efficient way to sample indices of a vector based on their importance. We view this problem as follows: imagine [0, 1] is divided into z “bins” with different lengths corresponding to the probability of selecting each bin, where z is the number of indices in a probability vector. We generate m random numbers uniformly from [0, 1] and see which bin each random number belongs to. If a random number is in bin i, we sample the i-th index of a vector. There are known algorithms [6, 19] to solve this problem in O(z +m) time. We give an alternative algorithm GENRANDTUPLES. 
Our algorithm combines Bentley and Saxe’s algorithm [3] for efficiently generating m sorted random numbers in O(m) time, and Knuth’s shuffling algorithm [12] for generating a random permutation of [m] in O(m) time. We use the notation CUMPROB(v, w) and CUMPROB(u, v, w) for the algorithm creating the distributions on Rn2 and Rn3 of Lemma 2 and Lemma 1, respectively. We note that naïvely applying previous algorithms would require z = O(n2) and z = O(n3) time to form these two distributions, but we can take O(m) samples from them implicitly in O(n+m) time. Fast approximate tensor contractions. We propose a fast way to approximately compute tensor contractions T(I, v, w) and T(u, v, w) with a sublinear number of samples of T, as shown in Alogrithm 1 and Algorithm 2. Naïvely computing tensor contractions using all of the entries of T gives an exact answer but could take n3 time. Also, to keep our algorithm sublinear time, we never explicitly compute the deflated tensor; rather we represent it implicitly and sample from it. Algorithm 1 Subroutine for approximate tensor contraction T(I, v, w) 1: function APPROXTIVW(T, v, w, n,B, {b̂i}) 2: q̃, r̃ ← CUMPROB(v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES( ∑n i=1 b̂i, q̃, r̃) 5: for i = 1→ n do 6: s(d)i ← 0 7: for ` = 1→ b̂i do 8: (j, k)← L(i−1)b+` 9: s(d)i ← s (d) i + 1 qjrk Ti,j,k ·uj · uk 10: T̂(I, v, w)i ← median d∈[B] s (d) i /b̂i, ∀i ∈ [n] 11: return T̂(I, v, w) Algorithm 2 Subroutine for approximate tensor contraction T(u, v, w) 1: function APPROXTUVW(T, u, v, w, n,B, b̂) 2: p̃, q̃, r̃ ← CUMPROB(u, v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES(̂b, p̃, q̃, r̃). 5: s(d) ← 0 6: for (i, j, k) ∈ L do 7: s(d) ← s(d) + 1piqjrk Ti,j,k ·ui · uj · uk 8: s(d) ← s(d)/b̂ 9: T̂(u, v, w)← median d∈[B] s(d) 10: return T̂(u, v, w) The following theorem gives the error bounds of APPROXTIVW and APPROXTUVW (in Algorithm 1 and 2). Let b̂i be the number samples we take from slice i ∈ [n] in APPROXTIVW, and let b̂ denote the total number of samples in our algorithm. Theorem 3. For T ∈ Rn×n×n and u ∈ Rn with ‖u‖2 = 1, define the number ε1,T(u) = T̂(u, u, u) − T(u, u, u) and the vector ε2,T(u) = T̂(I, u, u) − T(I, u, u). For any b > 0, if b̂i & b‖Ti,∗,∗ ‖2F /‖T ‖2F then the following bounds hold 1: E[|ε1,T(u)|2] = O(‖T ‖2F /b), and E[‖ε2,T(u)‖22] = O(n‖T ‖2F /b). In addition, for any fixed ω ∈ Rn with ‖ω‖2 = 1, E[〈ω, ε2,T (u)〉2] = O(‖T ‖2F /b). (1) Eq. (1) can be obtained by observing that each random variable [ε2,T(u)]i is independent and so V[〈ω, ε2,T(u)〉] = ∑n i=1 ω 2 i ‖Ti,∗,∗ ‖2F b̂i . ( ∑n i=1 ω 2 i ) ‖T ‖2F b = ‖T ‖2F b . Remark 4. In [23], the coordinate-wise median of B estimates to the T(I, v, w) is used to boost the success probability. There appears to be a gap [21] in their argument as it is unclear how to achieve (1) after taking a coordinate-wise median, which is (7) in Theorem 1 of [23]. To fix this, we instead pay a factor proportional to the number of iterations in Algorithm 3 in the sample complexity b̂. Since we have expectation bounds on the quantities in Theorem 3, we can apply a Markov bound and a union bound across all iterations. This suffices for our main theorem concerning sublinear time below. One can obtain high probability bounds by running Algorithm 3 multiple times independently, and taking coordinate-wise medians of the output eigenvectors. Empirically, our algorithm works even if we take the median in each iteration, which is done in line 10 in Algorithm 1. 
Replacing Theorem 1 in [23] by our Theorem 3, the rest of the analysis in [23] is unchanged. Our Algorithm 3 is the same as the sketching-based robust tensor power method in [23], except for lines 10, 12, 15, and 17, where the sketching-based approximate tensor contraction is replaced by our importance sampling procedures APPROXTUVW and APPROXTIVW. Rather than use Theorem 2 of Wang et al. [23], the main theorem concerning the correctness of the robust tensor decomposition algorithm, we use a recent improvement of it by Wang and Anandkumar in Theorems 4.1 and 4.2 of [22], which states general guarantees for any algorithm satisfying per iteration noise guarantees. These theorems also remove many of the earlier eigenvalue assumptions in Theorem 2 of [23]. Theorem 5. (Theorem 4.1 and 4.2 of [22]), Suppose T = T∗+E, where T = ∑k i=1 λiv ⊗3 i with λi > 0 and orthonormal basis vectors {v1, . . . , vk} ⊆ Rn, n ≥ k. Let λmax, λmin be the largest and smallest values in {λi}ki=1 and {λ̂i, v̂i}ki=1 be outputs of the robust tensor power method. There exist absolute constants K0, C0, C1, C2, C3 > 0 such that if E satisfies ‖E(I, u(τ)t , u (τ) t )‖2 ≤ , |E(vi, u (τ) t , u (τ) t )| ≤ min{ / √ k,C0λmin/n}, (2) 1For two functions f, g, we use the shorthand f . g (resp. &) to indicate that f ≤ Cg (resp. ≥) for some absolute constant C. Algorithm 3 Our main algorithm 1: function IMPORTANCESAMPLINGRB(T, n,B, b) 2: if si are known, where ‖Ti,∗,∗ ‖2F . si then 3: b̂i ← b · si/‖T ‖2F ,∀i ∈ [n] 4: else 5: b̂i ← b/n, ∀i ∈ [n] 6: b̂ = ∑n i=1 b̂i 7: for ` = 1→ L do 8: u(`) ←INITIALIZATION 9: for t = 1→ T do 10: u(`) ← APPROXTIVW(T, u(`), u(`), n,B, {b̂i}) 11: u(`) ← u(`)/‖u(`)‖2 12: λ(`) ← APPROXTUVW(T, u(`), u(`), u(`), n,B, b̂) 13: `∗ ← arg max`∈[L] λ(`), u∗ ← u(` ∗) 14: for t = 1→ T do 15: u∗ ← APPROXTIVW(T, u∗, u∗, n,B, {b̂i}) 16: u∗ ← u∗/‖u∗‖2 17: λ∗ ← APPROXTUVW(T, u∗, u∗, u∗, n,B, b̂) 18: return λ∗, u∗ for all i ∈ [k], t ∈ [T ], and τ ∈ [L] and furthermore ≤ C1 · λmin/ √ k, T = Ω(log(λmaxn/ )), L ≥ max{K0, k} log(max{K0, k}), then with probability at least 9/10, there exists a permutation π : [k]→ [k] such that |λi − λ̂π(i)| ≤ C2 , ‖vi − v̂π(i)‖2 ≤ C3 /λi, ∀i = 1, · · · , k. Combining the previous theorem with our importance sampling analysis, we obtain: Theorem 6 (Main). Assume the notation of Theorem 5. For each j ∈ [k], suppose we take b̂(j) =∑n i=1 b̂ (j) i samples during the power iterations for recovering λ̂j and v̂j , the number of samples for slice i is b̂(j)i & bkT‖[T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ]i,∗,∗‖2F /‖T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ‖2F where b & n‖T ‖2F / 2 + ‖T ‖2F /min{ / √ k, λmin/n}2. Then the output guarantees of Theorem 5 hold for Algorithm 3 with constant probability. Our total time is O(LTk2b̂) and the space is O(nk), where b̂ = maxj∈[k] b̂(j). In Theorem 3, if we require b̂i = b‖Ti,∗,∗ ‖2F /‖T ‖2F , we need to scan the entire tensor to compute ‖Ti,∗,∗ ‖2F , making our algorithm not sublinear. With the following mild assumption in Theorem 7, our algorithm is sublinear when sampling uniformly (̂bi = b/n) without computing ‖Ti,∗,∗ ‖2F : Theorem 7 (Bounded slice norm). There is a constant α > 0, a constant β ∈ (0, 1] and a sufficiently small constant γ > 0, such that, for any 3rd order tensor T = T∗+E ∈ Rn3 with rank(T∗) ≤ nγ , λk ≥ 1/nγ , if ‖Ti,∗,∗ ‖2F ≤ 1nβ ‖T ‖ 2 F for all i ∈ [n], and E satisfies (2), then Algorithm 3 runs in O(n3−α) time. The condition β ∈ (0, 1] is a practical one. When β = 1, all tensor slices have equal Frobenius norm. 
The case β = 0 only occurs when ‖Ti,∗,∗ ‖F = ‖T ‖F ; i.e., all except one slice is zero. This theorem can also be applied to asymmetric tensors, since the analysis in [23] can be extended to them. For certain cases, we can remove the bounded slice norm assumption. The idea is to take a sublinear number of samples from the tensor to obtain upper bounds on all slice norms. In the full version, we extend the algorithm and analysis of the robust tensor power method to p > 3 by replacing contractions T(u, v, w) and T(I, v, w) with T(u1, u2, · · · , up) and T(I, u2, · · · , up). As outlined in Section 1, when p is even, because we do not have sign cancellations we can show: Theorem 8 (Even order). There is a constant α > 0 and a sufficiently small constant γ > 0, such that, for any even order-p tensor T = T∗+E ∈ Rnp with rank(T∗) ≤ nγ , p ≤ nγ and λk ≥ 1/nγ . For any sufficiently large constant c0, there exists a sufficiently small constant c > 0, for any ∈ (0, cλk/(c0p2kn(p−2)/2)) if E satisfies ‖E ‖2 ≤ /(c0 √ n), Algorithm 3 runs in O(np−α) time. As outlined in Section 1, for p = 3 and small k we can take sign considerations into account: Theorem 9 (Low rank). There is a constant α > 0 and a sufficiently small constant γ > 0 such that for any symmetric tensor T = T∗+E ∈ Rn3 with E satisfying (2), rank(T∗) ≤ 2, and λk ≥ 1/nγ , then Algorithm 3 runs in O(n3−α) time. 3 Experiments 3.1 Experiment Setup and Datasets Our implementation shares the same code base 1 as the sketching-based robust tensor power method proposed in [23]. We ran our experiments on an i7-5820k CPU with 64 GB of memory in singlethreaded mode. We ran two versions of our algorithm: the version with pre-scanning scans the full tensor to accurately measure per-slice Frobenius norms and make samples for each slice in proportion to its Frobenius norm in APPROXTIVW; the version without pre-scanning assumes that the Frobenius norm of each slice is bounded by 1nα ‖T ‖ 2 F , α ∈ (0, 1] and uses b/n samples per slice, where b is the total number of samples our algorithm makes, analogous to the sketch length b in [23]. Synthetic datasets. We first generated an orthonormal basis {vi}ki=1 and then computed the synthetic tensor as T∗ = ∑k i=1 λiv ⊗3 i , with λ1 ≥ · · · ≥ λk. Then we normalized T ∗ such that ‖T∗ ‖F = 1, and added a symmetric Gaussian noise tensor E where Eijl ∼ N (0, σn1.5 ) for i ≤ j ≤ l. Then σ controls the noise-to-signal ratio and we kept it as 0.01 in all our synthetic tensors. For the eigenvalues λi, we generated three different decays: inverse decay λi = 1i , inverse square decay λi = 1 i2 , and linear decay λi = 1− i−1 k . We also set k = 100 when generating tensors, since higher rank eigenvalues were almost indistinguishable from the added noise. To show the scalability of our algorithm, we generated tensors with different dimensions: n = 200, 400, 600, 800, 1000, 1200. Real-life datasets. Latent Dirichlet Allocation [5] (LDA) is a powerful generative statistical model for topic modeling. A spectral method has been proposed to solve LDA models [1, 2] and the most critical step in spectral LDA is to decompose a symmetric K × K × K tensor with orthogonal eigenvectors, where K is the number of modeled topics. We followed the steps in [1, 18] and built a K ×K ×K tensor TLDA for each dataset, and then ran our algorithms directly on TLDA to see how it works on those tensors in real applications. In our experiments we keep K = 200. 
We used the same two datasets as the previous work [23], Wiki and Enron, as well as four additional real-life datasets. We refer the reader to our GitHub repository (https://github.com/huanzhang12/sampling_tensor_decomp/) for our code and full results.

3.2 Results

We considered running time and the squared residual norm to evaluate the performance of our algorithms. Given a tensor T ∈ R^{n^3}, let ‖T − ∑_{i=1}^k λ_i u_i ⊗ v_i ⊗ w_i‖_F^2 denote the squared residual norm, where {(λ_1, u_1, v_1, w_1), ..., (λ_k, u_k, v_k, w_k)} are the eigenvalue/eigenvector tuples obtained by the robust power method. To reduce the experiment time we looked only for the first eigenvalue and eigenvector, but our algorithm is capable of finding any number of eigenvalues/eigenvectors. We list the pre-scanning time as preprocessing time in the tables. It depends only on the tensor dimension n and, unlike the sketching-based method, it does not depend on b. Pre-scanning time is very short, because it requires only one pass of sequential access to the tensor, which is very efficient on hardware.

Sublinear time verification. Our theoretical result suggests the total number of samples b_no-prescan for our algorithm without pre-scanning is n^{1−α} (α ∈ (0, 1]) times larger than b_prescan for our algorithm with pre-scanning. But in experiments we observe that when b_no-prescan = b_prescan both algorithms achieve very similar accuracy, indicating that in practice α ≈ 1.

Synthetic datasets. We ran our algorithm on a large number of synthetic tensors with different dimensions and different eigengaps. Table 1 shows results for a tensor of dimension 1200 with 100 non-zero eigenvalues decaying as λ_i = 1/i^2. To reach roughly the same residual norm, the running time of our algorithm is over 50 times faster than that of the sketching-based robust tensor power method, thanks to the fact that we usually need a relatively small B and b to get a good residual, and the hidden constant factor in the running time of sampling is much smaller than that of sketching. Our algorithm scales well on large tensors due to its sublinear nature. In Figure 1(a), for the sketching-based method we kept b = 2^16, with B = 30 for n ≤ 800 and B = 50 for n > 800 (larger n requires more sketches to observe a reasonable recovery). For our algorithm, we chose b and B such that for each n, our residual norm is on par with or better than the sketching-based method. Our algorithm needs much less time than the sketching-based one over all dimensions. Another advantage of our algorithm is that there are zero or very minimal preprocessing steps. In Figure 1(b), we can see how the preprocessing time grows to prepare sketches when the dimension increases. For applications where only the first few eigenvectors are needed, the preprocessing time could be a large overhead.

Real-life datasets. Due to the small tensor dimension (200), our algorithm shows a smaller speedup over the sketching-based method. But it is still 2–6 times faster on each of the six real-life datasets, achieving the same squared residual norm. Table 2 reports results for one of the datasets under many different settings of (b, B). As with the synthetic datasets, we empirically observe that the constant b needed by importance sampling is much smaller than the b used in sketching to get the same error guarantee.

1 http://yining-wang.com/fftlda-code.zip (the code base of [23], referenced in Section 3.1)
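To make the synthetic setup above concrete, here is a small NumPy sketch that builds a noisy orthogonally decomposable test tensor and evaluates the squared residual norm. It is an illustration, not the scripts used for Table 1: the symmetric noise is obtained by averaging a Gaussian tensor over index permutations (a slight simplification of drawing E_{ijl} only for i ≤ j ≤ l), and the function names are ours.

```python
import numpy as np

def make_synthetic_tensor(n, k, decay="inv_sq", sigma=0.01, seed=0):
    """Build T = (sum_r lambda_r v_r^{(x3)}) / ||.||_F + symmetric Gaussian noise."""
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))          # orthonormal v_1..v_k
    i = np.arange(1, k + 1)
    lam = {"inv": 1.0 / i, "inv_sq": 1.0 / i**2, "linear": 1.0 - (i - 1) / k}[decay]
    T = np.einsum('ir,jr,kr->ijk', V * lam, V, V)              # sum_r lam_r v_r ⊗ v_r ⊗ v_r
    T /= np.linalg.norm(T)                                     # normalize so ||T*||_F = 1
    E = rng.normal(0.0, sigma / n**1.5, size=(n, n, n))
    E = (E + E.transpose(1, 0, 2) + E.transpose(2, 1, 0)
           + E.transpose(0, 2, 1) + E.transpose(1, 2, 0) + E.transpose(2, 0, 1)) / 6
    return T + E, lam, V

def sq_residual(T, lam, U, V, W):
    """Squared residual norm ||T - sum_r lam_r u_r ⊗ v_r ⊗ w_r||_F^2."""
    R = T - np.einsum('r,ir,jr,kr->ijk', lam, U, V, W)
    return np.linalg.norm(R)**2
```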
1. What is the focus of the paper regarding tensor contraction and orthogonal tensor decomposition algorithms? 2. What are the strengths and weaknesses of the proposed sketching algorithm compared to FFT sketching? 3. Do you have any concerns about the theoretical analysis and bounds provided in the paper? 4. How does the reviewer assess the empirical evidence and experimental results presented in the paper? 5. Are there any suggestions or recommendations for future improvements or comparisons between the proposed algorithm and other methods?
Review
Review This paper proposes a sketching-style algorithm that computes tensor contractions via importance sampling, which is a per-iteration step in many orthogonal tensor decomposition algorithms such as tensor power iteration and ALS. The authors show that importance sampling (IS) enjoys the same two guarantees as FFT sketching: (1) a concentration bound for the tensor contraction, and (2) an error bound for tensor power iteration. The main argument of this paper is that IS can achieve the same accuracy much faster than FFT sketching, mainly because (i) it can use a much smaller sampling size than the sketch size, and (ii) FFT has the overhead of sketching the whole tensor. For (i), the authors provide some empirical evidence. Yet the theory established in this paper says the two will scale the same -- this suggests that the bound may be suboptimal. Besides, for all experiments on real datasets, the relative error is close to 1. I think it is hard to assess their performance at this accuracy. How do they compare if we run them longer? For (ii), as the authors mention, if the tensor has a factored form, e.g., an empirical moment tensor, the overhead of FFT can be significantly reduced. As before, because the theory shows the sampling and sketching sizes are the same, the time complexity of FFT is actually better. This might lessen the potential impact of the proposed algorithm. To summarize, I think this paper gives some empirical support for the effectiveness of IS, yet the theoretical side is not ready.
NIPS
Title Sublinear Time Orthogonal Tensor Decomposition Abstract A recent work (Wang et. al., NIPS 2015) gives the fastest known algorithms for orthogonal tensor decomposition with provable guarantees. Their algorithm is based on computing sketches of the input tensor, which requires reading the entire input. We show in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor. Instead of using sketches to estimate inner products in tensor decomposition algorithms, we use importance sampling. To achieve sublinear time, we need to know the norms of tensor slices, and we show how to do this in a number of important cases. For symmetric tensors T = ∑k i=1 λiu ⊗p i with λi > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show if the tensor slice norms are just slightly below ‖T ‖F then sublinear time is again possible. One of the main strengths of our work is empirical in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy. N/A ∑k i=1 λiu ⊗p i with λi > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show if the tensor slice norms are just slightly below ‖T ‖F then sublinear time is again possible. One of the main strengths of our work is empirical - in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy. 1 Introduction Tensors are a powerful tool for dealing with multi-modal and multi-relational data. In recommendation systems, often using more than two attributes can lead to better recommendations. This could occur, for example, in Groupon where one could look at users, activities, and time (season, time of day, weekday/weekend, etc.), as three attributes to base predictions on (see [13] for a discussion). Similar to low rank matrix approximation, we seek a tensor decomposition to succinctly store the tensor and to apply it quickly. A popular decomposition method is the canonical polyadic decomposition, i.e., the CANDECOMP/PARAFAC (CP) decomposition, where the tensor is decomposed into a sum of rank-1 components [9]. We refer the reader to [23], where applications of CP including data mining, computational neuroscience, and statistical learning for latent variable models are mentioned. A natural question, given the emergence of large data sets, is whether such decompositions can be performed quickly. There are a number of works on this topic [17, 16, 7, 11, 10, 4, 20]. Most related to ours are several recent works of Wang et al. [23] and Tung et al. [18], in which it is shown how to significantly speed up this decomposition for orthogonal tensor decomposition using the randomized technique of linear sketching [15]. In this work we also focus on orthogonal tensor decomposition. The idea in [23] is to create a succinct sketch of the input tensor, from which one can then perform implicit tensor decomposition by approximating inner products in existing decomposition methods. 
Existing methods, like the power method, involve computing the inner product of a vector, which is now a rank-1 matrix, with another vector, which is now a slice of a tensor. Such inner products can ∗Full version appears on arXiv, 2017. ‡Work done while visiting IBM Almaden. †Supported by XDATA DARPA Air Force Research Laboratory contract FA8750-12-C-0323. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. be approximated much faster by instead computing the inner product of the sketched vectors, which have significantly lower dimension. One can also replace the sketching with sampling to approximate inner products; we discuss some sampling schemes [17, 4] below and compare them to our work. 1.1 Our Contributions We show in a number of important cases, one can achieve the same theoretical guarantees in the work of Wang et al. [23] (which was applied later by Tung et al. [18]), in sublinear time, that is, without reading most of the input tensor. While previous work needs to walk through the input at least once to create a sketch, we show one can instead perform importance sampling of the tensor based on the current iterate, together with reading a few entries of the tensor which help us learn the norms of tensor slices. We use a version of `2-sampling for our importance sampling. One source of speedup in our work and in Wang et al. [23] comes from approximating inner products in iterations in the robust tensor power method (see below). To estimate 〈u, v〉 for n-dimensional vectors u and v, their work computes sketches S(u) and S(v) and approximates 〈u, v〉 ≈ 〈S(u), S(v)〉. Instead, if one has u, one can sample coordinates i proportional to u2i , which is known as `2-sampling [14, 8]. One estimates 〈u, v〉 as vi‖u‖ 2 2 ui , which is unbiased and has variance O(‖u‖22‖v‖22). These guarantees are similar to those using sketching, though the constants are significantly smaller (see below), and unlike sketching, one does not need to read the entire tensor to perform such sampling. Symmetric Tensors: As in [23], we focus on orthogonal tensor decomposition of symmetric tensors, though we explain the extension to the asymmetric case below. Symmetric tensors arise in engineering applications, for example, to represent the symmetric tensor field of stress, strain, and anisotropic conductivity. Another example is diffusion MRI in which one uses symmetric tensors to describe diffusion in the brain or other parts of the body. In spectral methods symmetric tensors are exactly those that come up in Latent Dirichlet Allocation problems. Although one can symmetrize a tensor using simple matrix operations (see, e.g., [1]), we cannot do this in sublinear time. In orthogonal tensor decompostion of a symmetric matrix, there is an underlying n× n · · ·n tensor T∗ = ∑k i=1 λiv ⊗p i , and the input tensor is T = T ∗+E, where ‖E ‖2 ≤ . We have λ1 > λ2 > · · · > λk > 0 and that {vi}ki=1 is a set of orthonormal vectors. The goal is to reconstruct approximations v̂i to the vectors vi, and approximations λ̂i to the λi. Our results naturally generalize to tensors with different lengths in different dimensions. For simplicity, we first focus on order p = 3. In the robust tensor power method [1], one generates a random initial vector u, and performs T update steps û = T(I, u, u)/‖T(I, u, u)‖2, where T(I, u, u) = [ n∑ j=1 n∑ `=1 T1,j,` uju`, n∑ j=1 n∑ `=1 T2,j,` uju`, · · · , n∑ j=1 n∑ `=1 Tn,j,` uju` ] . The matrices T1,∗,∗, . . . ,Tn,∗,∗ are referred to as the slices. 
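The ℓ2-sampling estimate of an inner product described above is simple enough to state in a few lines of NumPy; the snippet below is a toy illustration (the names and the small test at the bottom are ours), not code from the paper.

```python
import numpy as np

def l2_sample_inner(u, v, m, rng):
    """Estimate <u, v> by l2-sampling: draw index i with probability u_i^2/||u||_2^2
    and average the unbiased terms v_i * ||u||_2^2 / u_i over m draws."""
    p = u**2 / np.dot(u, u)
    idx = rng.choice(len(u), size=m, p=p)
    return np.mean(v[idx] * np.dot(u, u) / u[idx])

rng = np.random.default_rng(0)
u, v = rng.standard_normal(10_000), rng.standard_normal(10_000)
print(np.dot(u, v), l2_sample_inner(u, v, m=2_000, rng=rng))  # exact vs. sampled
```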
The vector û typically converges to the top eigenvector in a small number of iterations, and one often chooses a small number L of random initial vectors to boost confidence. Successive eigenvectors can be found by deflation. The algorithm and analysis immediately extend to higher order tensors. We use `2-sampling to estimate T(I, u, u). To achieve the same guarantees as in [23], for typical settings of parameters (constant k and several eigenvalue assumptions) naïvely one needs to take O(n2) `2-samples from u for each slice in each iteration, resulting in Ω(n3) time and destroying our sublinearity. We observe that if we additionally knew the squared norms ‖T1,∗,∗ ‖2F , . . . , ‖Tn,∗,∗ ‖2F , then we could take O(n2) `2-samples in total, where we take ‖Ti,∗,∗ ‖2F ‖T ‖2F ·O(n2) `2-samples from the i-th slice in expectation. Perhaps in some applications such norms are known or cheap to compute in a single pass, but without further assumptions, how can one obtain such norms in sublinear time? If T is a symmetric tensor, then Tj,j,j = ∑k i=1 λiv 3 i,j + Ej,j,j . Note that if there were no noise, then we could read off approximations to the slice norms, since ‖Tj,∗,∗ ‖2F = ∑k i=1 λ 2 i v 2 i,j , and so T 2/3 j,j,j is an approximation to ‖Tj,∗,∗ ‖2F up to factors depending on k and the eigenvalues. However, there is indeed noise. To obtain non-trivial guarantees, the robust tensor power method assumes ‖E ‖2 = O(1/n), where ‖E ‖2 = sup ‖u‖2=‖v‖2=‖w‖2=1 E(u, v, w) = sup ‖u‖2=‖v‖2=‖w‖2=1 n∑ i=1 n∑ j=1 n∑ k=1 Ei,j,k uivjwk, which in particular implies |Ej,j,j | = O(1/n). This assumption comes from the Θ(1/ √ n)correlation of the random initial vector to v1. This noise bound does not trivialize the problem; indeed, Ej,j,j can be chosen adversarially subject to |Ej,j,j | = O(1/n), and if the vi were random unit vectors and the λi and k were constant, then ∑k i=1 λiv 3 i,j = O(1/n 3/2), which is small enough to be completely masked by the noise Ej,j,j . Nevertheless, there is a lot of information about the slice norms. Indeed, suppose k = 1, λ1 = Θ(1), and ‖T ‖F = 1. Then Tj,j,j = Θ(v31,j) + Ej,j,j , and one can show ‖Tj,∗,∗ ‖2F = λ21v21,j ± O(1/n). Again using that |Ej,j,j | = O(1/n), this implies ‖Tj,∗,∗ ‖2F = ω(n−2/3) if and only if Tj,j,j = ω(1/n), and therefore one would notice this by reading Tj,j,j . There can only be o(n2/3) slices j for which ‖Tj,∗,∗ ‖2F = ω(n−2/3), since ‖T ‖2F = 1. Therefore, for each of them we can afford to take O(n2) `2-samples and still have an O(n2+2/3) = o(n3) sublinear running time. The remaining slices all have ‖Tj,∗,∗ ‖2F = O(n−2/3), and therefore if we also take O(n1/3) `2-samples from every slice, we will also estimate the contribution to T(I, u, u) from these slices well. This is also a sublinear O(n2+1/3) number of samples. While the previous paragraph illustrates the idea for k = 1, for k = 2 we need to read more than the Tj,j,j entries to decide how many `2-samples to take from a slice. The analysis is more complicated because of sign cancellations. Even for k = 2 we could have Tj,j,j = λ1v31,j + λ2v 3 2,j + Ej,j,j , and if v1,j = −v2,j then we may not detect that ‖Tj,∗,∗ ‖2F is large. We fix this by also reading the entries Ti,j,j ,Tj,i,j , and Tj,j,i for every i and j. This is still only O(n2) entries and so we are still sublinear time. Without additional assumptions, we only give a formal analysis of this for k ∈ {1, 2}. 
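To make the k = 1 argument concrete, the following toy sketch allocates per-slice sample budgets after reading only the n diagonal entries T_{j,j,j}: slices whose diagonal entry is large (and hence, by the argument above, whose squared Frobenius norm is ω(n^{-2/3})) receive an O(n^2) budget, and all remaining slices receive an O(n^{1/3}) budget. The threshold constant and the function name are ours and purely illustrative.

```python
import numpy as np

def slice_budgets_from_diagonal(T, b_heavy=None, b_light=None):
    """Toy version of the k = 1 budgeting rule: read only the n diagonal entries
    T[j,j,j]; slices with |T[j,j,j]| much larger than 1/n are treated as heavy
    and get a large budget, the rest a small flat one."""
    n = T.shape[0]
    if b_heavy is None:
        b_heavy = n * n                          # O(n^2) samples for heavy slices
    if b_light is None:
        b_light = int(np.ceil(n ** (1.0 / 3)))   # O(n^{1/3}) samples otherwise
    diag = np.abs(T[np.arange(n), np.arange(n), np.arange(n)])
    heavy = diag > 10.0 / n                      # illustrative constant, not from the paper
    return np.where(heavy, b_heavy, b_light)
```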
More importantly, if instead of third-order symmetric tensors we consider p-th order symmetric tensors for even p, we do not have such sign cancellations. In this case we do not have any restrictions on k for estimating slice norms. One does need to show after deflation, the slice norms can still be estimated; this holds because the eigenvectors and eigenvalues are estimated sufficiently well. We also give several per-iteration optimizations of our algorithm, based on careful implementations of generating a sorted list of random numbers and random permutations. We find empirically (see below) that we are much faster per iteration than previous sketching algorithms, in addition to not having to read the entire input tensor in a preprocessing step. Asymmetric Tensors: For asymmetric tensors, e.g., 3rd-order tensors of the form ∑k i=1 λiui⊗ vi⊗ wi, it is impossible to achieve sublinear time in general, since it is hard to distinguish T = ei⊗ej⊗ek for random i, j, k ∈ {1, 2, . . . , n} from T = 0⊗3. We make a necessary and sufficient assumption that all the entries of the ui are less than n−γ for an arbitrarily small constant γ > 0. In this case, all slice norms are o(n−γ) and by taking O(n2−γ) samples from each slice we achieve sublinear time. We can also apply such an assumption to symmetric tensors. Empirical Results: One of the main strengths of our work is our empirical results. In each iteration we approximate T(I, u, u) a total of B times independently and take the median to increase our confidence. In the notation of [23], B corresponds to the number of independent sketches used. While the median works empirically, there are some theoretical issues with it discussed in Remark 4. Also let b be the total number of `2-samples we take per iteration, which corresponds to the sketch size in the notation of [23]. We found that empirically we can set B and b to be much smaller than that in [23] and achieve the same error guarantees. One explanation for this is that the variance bound we obtain via importance sampling is a factor of 43 = 64 smaller than in [23], and for p-th order tensors, a factor of 4p smaller. To give an idea of how much smaller we can set b andB, to achieve roughly the same squared residual norm error on the synthetic data sets of dimension 1200 for finding a good rank-1 approximation, the algorithm of [23] would need to set parameters b = 216 and B = 50, whereas we can set b = 10× 1200 and B = 5. Our running time is 2.595 seconds and we have no preprocessing time, whereas the algorithm of [23] has a running time of 116.3 seconds and 55.34 seconds of preprocessing time. We refer the reader to Table 1 in Section 3. In total we are over 50 times faster. We also demonstrate our algorithm in a real-world application using real datasets, even when the datasets are sparse. Namely, we consider a spectral algorithm for Latent Dirichlet Allocation [1, 2] which uses tensor decomposition as its core computational step. We show a significant speedup can be achieved on tensors occurring in applications such as LDA, and we refer the reader to Table 2 in Section 3. For example, on the wiki [23] dataset with a tensor dimension of 200, we run more than 5 times faster than the sketching-based method. 
Previous Sampling Algorithms: Previous sampling-based schemes of [17, 4] do not achieve our guarantees, because [17] uses uniform sampling, which does not work for tensors with spiky elements, while the non-uniform sampling in [4] requires touching all of the entries in the tensor and making two passes over it. Notation Let [n] denote {1, 2, . . . , n}. Let ⊗ denote the outer product, and u⊗3 = u ⊗ u ⊗ u. Let T ∈ Rnp , where p is the order of tensor T and n is the dimension of tensor T. Let 〈A,B〉 denote the entry-wise inner product between two tensors A,B ∈ Rnp , e.g., 〈A,B〉 = ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 Ai1,i2,··· ,ip ·Bi1,i2,··· ,ip . For a tensor A ∈ Rn p , ‖A ‖F = ( ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 A2i1,··· ,ip) 1 2 . For random variable X let E[X] denote its expectation of X and V[X] its variance (if these quantities exist). 2 Main Results We explain the details of our main results in this section. First, we state the importance sampling lemmas for our tensor application. Second, we explain how to quickly produce a list of random tuples according to a certain distribution needed by our algorithm. Third, we combine the first and the second parts to get a fast way of approximating tensor contractions, which are used as subroutines in each iteration of the robust tensor power method. We then provide our main theoretical results, and how to estimate the slice norms needed by our main algorithm. Importance sampling lemmas. Approximating an inner product is a simple application of importance sampling. Tensor contraction T(u, v, w) can be regarded as the inner product between two n3-dimensional vectors, and thus importance sampling can be applied. Lemma 1 suggests that we can take a few samples according to their importance, e.g., we can sample Ti,j,k uivjwk with probability |uivjwk|2/‖u‖22‖v‖22‖w‖22. As long as the number of samples is large enough, it will approximate the true tensor contraction ∑ i ∑ j ∑ kTi,j,k uivjwk with small variance after a final rescaling. Lemma 1. Suppose random variable X = Ti,j,k uivjwk/(piqjrk) with probability piqjrk where pi = |ui|2/‖u‖22, qj = |vj |2/‖v‖22, and rk = |wk|2/‖w‖22, and we take L i.i.d. samples of X , denoted X1, X2, · · · , XL. Let Y = 1L ∑L `=1X`. Then (1) E[Y ] = 〈T, u ⊗ v ⊗ w〉, and (2) V[Y ] ≤ 1L‖T ‖ 2 F · ‖u⊗ v ⊗ w‖2F . Similarly, we also have importance sampling for each slice Ti,∗,∗, i.e., “face” of T. Lemma 2. For all i ∈ [n], suppose random variable Xi = Ti,j,k vjwk/(qjrk) with probability qjrk, where qj = |vj |2/‖v‖22 and rk = |wk|2/‖w‖22, and we take Li i.i.d. samples of Xi, say Xi1, X i 2, · · · , XiLi . Let Y i = 1Li ∑L `=1X i ` . Then (1) E[Y i] = 〈Ti,∗,∗, v ⊗ w〉 and (2) V[Y i] ≤ 1 Li ‖Ti,∗,∗ ‖2F ‖v ⊗ w‖2F . Generating importance samples in linear time. We need an efficient way to sample indices of a vector based on their importance. We view this problem as follows: imagine [0, 1] is divided into z “bins” with different lengths corresponding to the probability of selecting each bin, where z is the number of indices in a probability vector. We generate m random numbers uniformly from [0, 1] and see which bin each random number belongs to. If a random number is in bin i, we sample the i-th index of a vector. There are known algorithms [6, 19] to solve this problem in O(z +m) time. We give an alternative algorithm GENRANDTUPLES. 
Our algorithm combines Bentley and Saxe’s algorithm [3] for efficiently generating m sorted random numbers in O(m) time, and Knuth’s shuffling algorithm [12] for generating a random permutation of [m] in O(m) time. We use the notation CUMPROB(v, w) and CUMPROB(u, v, w) for the algorithm creating the distributions on Rn2 and Rn3 of Lemma 2 and Lemma 1, respectively. We note that naïvely applying previous algorithms would require z = O(n2) and z = O(n3) time to form these two distributions, but we can take O(m) samples from them implicitly in O(n+m) time. Fast approximate tensor contractions. We propose a fast way to approximately compute tensor contractions T(I, v, w) and T(u, v, w) with a sublinear number of samples of T, as shown in Alogrithm 1 and Algorithm 2. Naïvely computing tensor contractions using all of the entries of T gives an exact answer but could take n3 time. Also, to keep our algorithm sublinear time, we never explicitly compute the deflated tensor; rather we represent it implicitly and sample from it. Algorithm 1 Subroutine for approximate tensor contraction T(I, v, w) 1: function APPROXTIVW(T, v, w, n,B, {b̂i}) 2: q̃, r̃ ← CUMPROB(v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES( ∑n i=1 b̂i, q̃, r̃) 5: for i = 1→ n do 6: s(d)i ← 0 7: for ` = 1→ b̂i do 8: (j, k)← L(i−1)b+` 9: s(d)i ← s (d) i + 1 qjrk Ti,j,k ·uj · uk 10: T̂(I, v, w)i ← median d∈[B] s (d) i /b̂i, ∀i ∈ [n] 11: return T̂(I, v, w) Algorithm 2 Subroutine for approximate tensor contraction T(u, v, w) 1: function APPROXTUVW(T, u, v, w, n,B, b̂) 2: p̃, q̃, r̃ ← CUMPROB(u, v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES(̂b, p̃, q̃, r̃). 5: s(d) ← 0 6: for (i, j, k) ∈ L do 7: s(d) ← s(d) + 1piqjrk Ti,j,k ·ui · uj · uk 8: s(d) ← s(d)/b̂ 9: T̂(u, v, w)← median d∈[B] s(d) 10: return T̂(u, v, w) The following theorem gives the error bounds of APPROXTIVW and APPROXTUVW (in Algorithm 1 and 2). Let b̂i be the number samples we take from slice i ∈ [n] in APPROXTIVW, and let b̂ denote the total number of samples in our algorithm. Theorem 3. For T ∈ Rn×n×n and u ∈ Rn with ‖u‖2 = 1, define the number ε1,T(u) = T̂(u, u, u) − T(u, u, u) and the vector ε2,T(u) = T̂(I, u, u) − T(I, u, u). For any b > 0, if b̂i & b‖Ti,∗,∗ ‖2F /‖T ‖2F then the following bounds hold 1: E[|ε1,T(u)|2] = O(‖T ‖2F /b), and E[‖ε2,T(u)‖22] = O(n‖T ‖2F /b). In addition, for any fixed ω ∈ Rn with ‖ω‖2 = 1, E[〈ω, ε2,T (u)〉2] = O(‖T ‖2F /b). (1) Eq. (1) can be obtained by observing that each random variable [ε2,T(u)]i is independent and so V[〈ω, ε2,T(u)〉] = ∑n i=1 ω 2 i ‖Ti,∗,∗ ‖2F b̂i . ( ∑n i=1 ω 2 i ) ‖T ‖2F b = ‖T ‖2F b . Remark 4. In [23], the coordinate-wise median of B estimates to the T(I, v, w) is used to boost the success probability. There appears to be a gap [21] in their argument as it is unclear how to achieve (1) after taking a coordinate-wise median, which is (7) in Theorem 1 of [23]. To fix this, we instead pay a factor proportional to the number of iterations in Algorithm 3 in the sample complexity b̂. Since we have expectation bounds on the quantities in Theorem 3, we can apply a Markov bound and a union bound across all iterations. This suffices for our main theorem concerning sublinear time below. One can obtain high probability bounds by running Algorithm 3 multiple times independently, and taking coordinate-wise medians of the output eigenvectors. Empirically, our algorithm works even if we take the median in each iteration, which is done in line 10 in Algorithm 1. 
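One possible reading of this sampling primitive, simplified to a single probability vector, is sketched below: sorted uniforms are produced in O(m) time via normalized partial sums of exponentials (our assumption for the construction attributed to Bentley and Saxe), mapped to indices of the cumulative distribution, and finally shuffled in the Knuth style. The paper maps the sorted uniforms to bins with a linear merge; `np.searchsorted` is used here only for brevity, and the names are ours.

```python
import numpy as np

def sorted_uniforms(m, rng):
    """m sorted Uniform(0,1) variates in O(m) time: normalized partial sums of
    m+1 i.i.d. exponentials are jointly distributed as uniform order statistics."""
    s = np.cumsum(rng.exponential(size=m + 1))
    return s[:m] / s[m]

def gen_rand_indices(m, prob, rng):
    """Draw m indices i with probability prob[i]: bin the sorted uniforms by the
    cumulative distribution, then shuffle to restore a random order."""
    cum = np.cumsum(prob)
    idx = np.searchsorted(cum, sorted_uniforms(m, rng))
    rng.shuffle(idx)
    return idx
```

Since the pair and triple distributions produced by CUMPROB are product distributions (q_j r_k and p_i q_j r_k), one can equivalently sample each coordinate's index with an independent call to the routine above.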
Replacing Theorem 1 in [23] by our Theorem 3, the rest of the analysis in [23] is unchanged. Our Algorithm 3 is the same as the sketching-based robust tensor power method in [23], except for lines 10, 12, 15, and 17, where the sketching-based approximate tensor contraction is replaced by our importance sampling procedures APPROXTUVW and APPROXTIVW. Rather than use Theorem 2 of Wang et al. [23], the main theorem concerning the correctness of the robust tensor decomposition algorithm, we use a recent improvement of it by Wang and Anandkumar in Theorems 4.1 and 4.2 of [22], which states general guarantees for any algorithm satisfying per iteration noise guarantees. These theorems also remove many of the earlier eigenvalue assumptions in Theorem 2 of [23]. Theorem 5. (Theorem 4.1 and 4.2 of [22]), Suppose T = T∗+E, where T = ∑k i=1 λiv ⊗3 i with λi > 0 and orthonormal basis vectors {v1, . . . , vk} ⊆ Rn, n ≥ k. Let λmax, λmin be the largest and smallest values in {λi}ki=1 and {λ̂i, v̂i}ki=1 be outputs of the robust tensor power method. There exist absolute constants K0, C0, C1, C2, C3 > 0 such that if E satisfies ‖E(I, u(τ)t , u (τ) t )‖2 ≤ , |E(vi, u (τ) t , u (τ) t )| ≤ min{ / √ k,C0λmin/n}, (2) 1For two functions f, g, we use the shorthand f . g (resp. &) to indicate that f ≤ Cg (resp. ≥) for some absolute constant C. Algorithm 3 Our main algorithm 1: function IMPORTANCESAMPLINGRB(T, n,B, b) 2: if si are known, where ‖Ti,∗,∗ ‖2F . si then 3: b̂i ← b · si/‖T ‖2F ,∀i ∈ [n] 4: else 5: b̂i ← b/n, ∀i ∈ [n] 6: b̂ = ∑n i=1 b̂i 7: for ` = 1→ L do 8: u(`) ←INITIALIZATION 9: for t = 1→ T do 10: u(`) ← APPROXTIVW(T, u(`), u(`), n,B, {b̂i}) 11: u(`) ← u(`)/‖u(`)‖2 12: λ(`) ← APPROXTUVW(T, u(`), u(`), u(`), n,B, b̂) 13: `∗ ← arg max`∈[L] λ(`), u∗ ← u(` ∗) 14: for t = 1→ T do 15: u∗ ← APPROXTIVW(T, u∗, u∗, n,B, {b̂i}) 16: u∗ ← u∗/‖u∗‖2 17: λ∗ ← APPROXTUVW(T, u∗, u∗, u∗, n,B, b̂) 18: return λ∗, u∗ for all i ∈ [k], t ∈ [T ], and τ ∈ [L] and furthermore ≤ C1 · λmin/ √ k, T = Ω(log(λmaxn/ )), L ≥ max{K0, k} log(max{K0, k}), then with probability at least 9/10, there exists a permutation π : [k]→ [k] such that |λi − λ̂π(i)| ≤ C2 , ‖vi − v̂π(i)‖2 ≤ C3 /λi, ∀i = 1, · · · , k. Combining the previous theorem with our importance sampling analysis, we obtain: Theorem 6 (Main). Assume the notation of Theorem 5. For each j ∈ [k], suppose we take b̂(j) =∑n i=1 b̂ (j) i samples during the power iterations for recovering λ̂j and v̂j , the number of samples for slice i is b̂(j)i & bkT‖[T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ]i,∗,∗‖2F /‖T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ‖2F where b & n‖T ‖2F / 2 + ‖T ‖2F /min{ / √ k, λmin/n}2. Then the output guarantees of Theorem 5 hold for Algorithm 3 with constant probability. Our total time is O(LTk2b̂) and the space is O(nk), where b̂ = maxj∈[k] b̂(j). In Theorem 3, if we require b̂i = b‖Ti,∗,∗ ‖2F /‖T ‖2F , we need to scan the entire tensor to compute ‖Ti,∗,∗ ‖2F , making our algorithm not sublinear. With the following mild assumption in Theorem 7, our algorithm is sublinear when sampling uniformly (̂bi = b/n) without computing ‖Ti,∗,∗ ‖2F : Theorem 7 (Bounded slice norm). There is a constant α > 0, a constant β ∈ (0, 1] and a sufficiently small constant γ > 0, such that, for any 3rd order tensor T = T∗+E ∈ Rn3 with rank(T∗) ≤ nγ , λk ≥ 1/nγ , if ‖Ti,∗,∗ ‖2F ≤ 1nβ ‖T ‖ 2 F for all i ∈ [n], and E satisfies (2), then Algorithm 3 runs in O(n3−α) time. The condition β ∈ (0, 1] is a practical one. When β = 1, all tensor slices have equal Frobenius norm. 
The case β = 0 only occurs when ‖Ti,∗,∗ ‖F = ‖T ‖F ; i.e., all except one slice is zero. This theorem can also be applied to asymmetric tensors, since the analysis in [23] can be extended to them. For certain cases, we can remove the bounded slice norm assumption. The idea is to take a sublinear number of samples from the tensor to obtain upper bounds on all slice norms. In the full version, we extend the algorithm and analysis of the robust tensor power method to p > 3 by replacing contractions T(u, v, w) and T(I, v, w) with T(u1, u2, · · · , up) and T(I, u2, · · · , up). As outlined in Section 1, when p is even, because we do not have sign cancellations we can show: Theorem 8 (Even order). There is a constant α > 0 and a sufficiently small constant γ > 0, such that, for any even order-p tensor T = T∗+E ∈ Rnp with rank(T∗) ≤ nγ , p ≤ nγ and λk ≥ 1/nγ . For any sufficiently large constant c0, there exists a sufficiently small constant c > 0, for any ∈ (0, cλk/(c0p2kn(p−2)/2)) if E satisfies ‖E ‖2 ≤ /(c0 √ n), Algorithm 3 runs in O(np−α) time. As outlined in Section 1, for p = 3 and small k we can take sign considerations into account: Theorem 9 (Low rank). There is a constant α > 0 and a sufficiently small constant γ > 0 such that for any symmetric tensor T = T∗+E ∈ Rn3 with E satisfying (2), rank(T∗) ≤ 2, and λk ≥ 1/nγ , then Algorithm 3 runs in O(n3−α) time. 3 Experiments 3.1 Experiment Setup and Datasets Our implementation shares the same code base 1 as the sketching-based robust tensor power method proposed in [23]. We ran our experiments on an i7-5820k CPU with 64 GB of memory in singlethreaded mode. We ran two versions of our algorithm: the version with pre-scanning scans the full tensor to accurately measure per-slice Frobenius norms and make samples for each slice in proportion to its Frobenius norm in APPROXTIVW; the version without pre-scanning assumes that the Frobenius norm of each slice is bounded by 1nα ‖T ‖ 2 F , α ∈ (0, 1] and uses b/n samples per slice, where b is the total number of samples our algorithm makes, analogous to the sketch length b in [23]. Synthetic datasets. We first generated an orthonormal basis {vi}ki=1 and then computed the synthetic tensor as T∗ = ∑k i=1 λiv ⊗3 i , with λ1 ≥ · · · ≥ λk. Then we normalized T ∗ such that ‖T∗ ‖F = 1, and added a symmetric Gaussian noise tensor E where Eijl ∼ N (0, σn1.5 ) for i ≤ j ≤ l. Then σ controls the noise-to-signal ratio and we kept it as 0.01 in all our synthetic tensors. For the eigenvalues λi, we generated three different decays: inverse decay λi = 1i , inverse square decay λi = 1 i2 , and linear decay λi = 1− i−1 k . We also set k = 100 when generating tensors, since higher rank eigenvalues were almost indistinguishable from the added noise. To show the scalability of our algorithm, we generated tensors with different dimensions: n = 200, 400, 600, 800, 1000, 1200. Real-life datasets. Latent Dirichlet Allocation [5] (LDA) is a powerful generative statistical model for topic modeling. A spectral method has been proposed to solve LDA models [1, 2] and the most critical step in spectral LDA is to decompose a symmetric K × K × K tensor with orthogonal eigenvectors, where K is the number of modeled topics. We followed the steps in [1, 18] and built a K ×K ×K tensor TLDA for each dataset, and then ran our algorithms directly on TLDA to see how it works on those tensors in real applications. In our experiments we keep K = 200. 
We used the same two datasets as the previous work [23], Wiki and Enron, as well as four additional real-life datasets. We refer the reader to our GitHub repository (https://github.com/huanzhang12/sampling_tensor_decomp/) for our code and full results.

3.2 Results

We considered running time and the squared residual norm to evaluate the performance of our algorithms. Given a tensor T ∈ R^{n^3}, let ‖T − ∑_{i=1}^k λ_i u_i ⊗ v_i ⊗ w_i‖_F^2 denote the squared residual norm, where {(λ_1, u_1, v_1, w_1), ..., (λ_k, u_k, v_k, w_k)} are the eigenvalue/eigenvector tuples obtained by the robust power method. To reduce the experiment time we looked only for the first eigenvalue and eigenvector, but our algorithm is capable of finding any number of eigenvalues/eigenvectors. We list the pre-scanning time as preprocessing time in the tables. It depends only on the tensor dimension n and, unlike the sketching-based method, it does not depend on b. Pre-scanning time is very short, because it requires only one pass of sequential access to the tensor, which is very efficient on hardware.

Sublinear time verification. Our theoretical result suggests the total number of samples b_no-prescan for our algorithm without pre-scanning is n^{1−α} (α ∈ (0, 1]) times larger than b_prescan for our algorithm with pre-scanning. But in experiments we observe that when b_no-prescan = b_prescan both algorithms achieve very similar accuracy, indicating that in practice α ≈ 1.

Synthetic datasets. We ran our algorithm on a large number of synthetic tensors with different dimensions and different eigengaps. Table 1 shows results for a tensor of dimension 1200 with 100 non-zero eigenvalues decaying as λ_i = 1/i^2. To reach roughly the same residual norm, the running time of our algorithm is over 50 times faster than that of the sketching-based robust tensor power method, thanks to the fact that we usually need a relatively small B and b to get a good residual, and the hidden constant factor in the running time of sampling is much smaller than that of sketching. Our algorithm scales well on large tensors due to its sublinear nature. In Figure 1(a), for the sketching-based method we kept b = 2^16, with B = 30 for n ≤ 800 and B = 50 for n > 800 (larger n requires more sketches to observe a reasonable recovery). For our algorithm, we chose b and B such that for each n, our residual norm is on par with or better than the sketching-based method. Our algorithm needs much less time than the sketching-based one over all dimensions. Another advantage of our algorithm is that there are zero or very minimal preprocessing steps. In Figure 1(b), we can see how the preprocessing time grows to prepare sketches when the dimension increases. For applications where only the first few eigenvectors are needed, the preprocessing time could be a large overhead.

Real-life datasets. Due to the small tensor dimension (200), our algorithm shows a smaller speedup over the sketching-based method. But it is still 2–6 times faster on each of the six real-life datasets, achieving the same squared residual norm. Table 2 reports results for one of the datasets under many different settings of (b, B). As with the synthetic datasets, we empirically observe that the constant b needed by importance sampling is much smaller than the b used in sketching to get the same error guarantee.

1 http://yining-wang.com/fftlda-code.zip (the code base of [23], referenced in Section 3.1)
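Putting the pieces together, a compact skeleton of the overall procedure in the style of Algorithm 3 might look as follows. It is a sketch rather than the paper's implementation: the sampled contraction estimators are passed in as callables (for example, the approx_tivw/approx_tuvw sketches shown earlier with budgets fixed up front), deflation is done implicitly by subtracting the recovered rank-1 terms from the estimates, and the final refinement pass over the best candidate (lines 14–17 of Algorithm 3) is omitted for brevity.

```python
import numpy as np

def power_method_sampled(contract_Iuu, contract_uuu, n, k, L=10, T_iters=30, seed=0):
    """Skeleton of an importance-sampling robust tensor power method.

    contract_Iuu(u) ~ T(I, u, u) and contract_uuu(u) ~ T(u, u, u) are sampled
    contraction estimators; the deflated tensor is never formed explicitly."""
    rng = np.random.default_rng(seed)
    lams, vecs = [], []

    def deflated_Iuu(u):
        est = contract_Iuu(u)
        for lam, v in zip(lams, vecs):
            est -= lam * v * np.dot(v, u) ** 2        # (lam * v⊗v⊗v)(I, u, u)
        return est

    def deflated_uuu(u):
        est = contract_uuu(u)
        for lam, v in zip(lams, vecs):
            est -= lam * np.dot(v, u) ** 3            # (lam * v⊗v⊗v)(u, u, u)
        return est

    for _ in range(k):
        best = None
        for _ in range(L):                            # L random restarts
            u = rng.standard_normal(n)
            u /= np.linalg.norm(u)
            for _ in range(T_iters):
                u = deflated_Iuu(u)
                u /= np.linalg.norm(u)
            lam = deflated_uuu(u)
            if best is None or lam > best[0]:
                best = (lam, u)
        lams.append(best[0])
        vecs.append(best[1])
    return np.array(lams), np.array(vecs)
```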
1. What is the focus and contribution of the paper on tensor decomposition? 2. What are the strengths of the proposed approach, particularly in terms of computational efficiency? 3. Do you have any concerns or typos regarding the paper's content? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor errors or typos in the paper that the reviewer noticed?
Review
Review The authors present a novel strategy to efficiently compute the CANDECOMP/PARAFAC tensor decomposition. Their contribution is to provide a means to perform the necessary computation of the inner products in sublinear time with respect to the number of elements of the tensor to be decomposed making use of importance sampling. The importance sampling is in turn shown to be possible in linear time for a specified number of samples. The authors additionally show that the resulting estimator is unbiased and state its variance The runtime of the method is finally compared to a previously established algorithm that is linear in the number of non-zero elements of the tensor and found to be indeed significantly lower.In my opinion, the overall quality of the paper is very high. The context and relevance as well as the contribution itself are clearly defined and thoroughly explained/proven. The experiments are reasonable and a comparision to the state of the art is provided. The results nicely confirm the theoretical findings. The only thing that one might miss is a brief summary or discussion at the very end. Please find below a list of further remarks. Languagewise, I just noticed a few minor errors, like in the first paragraph of the introduction ("A popular decomposition method is [the] canonical polyadic decomposition...") or in the last paragraph of the introduction ("from a theory standpoint[s]..."). In the function GENSORTEDRANDN, I believe the loops should run from 1 to m and not from 1 to L. Otherwise the stated complexity would not be correct. I however assume this to be only a typo. It should be stated in Figure 1 that the times are (I assume) given in seconds, like it is done in the tables. On page 6, there is a reference to Algorithm 5 when clearly Algorithm 4 is meant. You refer to Lemma 1 and 2 of reference 19, but there they are called Theorem 1 and 2. Furthermore, it seems that actually you only want to replace Theorem 1 and not 2, as Theorem 2 is in fact the main results that you claim carries over to the case of importance sampling.
NIPS
Title Sublinear Time Orthogonal Tensor Decomposition Abstract A recent work (Wang et. al., NIPS 2015) gives the fastest known algorithms for orthogonal tensor decomposition with provable guarantees. Their algorithm is based on computing sketches of the input tensor, which requires reading the entire input. We show in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor. Instead of using sketches to estimate inner products in tensor decomposition algorithms, we use importance sampling. To achieve sublinear time, we need to know the norms of tensor slices, and we show how to do this in a number of important cases. For symmetric tensors T = ∑k i=1 λiu ⊗p i with λi > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show if the tensor slice norms are just slightly below ‖T ‖F then sublinear time is again possible. One of the main strengths of our work is empirical in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy. N/A ∑k i=1 λiu ⊗p i with λi > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show if the tensor slice norms are just slightly below ‖T ‖F then sublinear time is again possible. One of the main strengths of our work is empirical - in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy. 1 Introduction Tensors are a powerful tool for dealing with multi-modal and multi-relational data. In recommendation systems, often using more than two attributes can lead to better recommendations. This could occur, for example, in Groupon where one could look at users, activities, and time (season, time of day, weekday/weekend, etc.), as three attributes to base predictions on (see [13] for a discussion). Similar to low rank matrix approximation, we seek a tensor decomposition to succinctly store the tensor and to apply it quickly. A popular decomposition method is the canonical polyadic decomposition, i.e., the CANDECOMP/PARAFAC (CP) decomposition, where the tensor is decomposed into a sum of rank-1 components [9]. We refer the reader to [23], where applications of CP including data mining, computational neuroscience, and statistical learning for latent variable models are mentioned. A natural question, given the emergence of large data sets, is whether such decompositions can be performed quickly. There are a number of works on this topic [17, 16, 7, 11, 10, 4, 20]. Most related to ours are several recent works of Wang et al. [23] and Tung et al. [18], in which it is shown how to significantly speed up this decomposition for orthogonal tensor decomposition using the randomized technique of linear sketching [15]. In this work we also focus on orthogonal tensor decomposition. The idea in [23] is to create a succinct sketch of the input tensor, from which one can then perform implicit tensor decomposition by approximating inner products in existing decomposition methods. 
Existing methods, like the power method, involve computing the inner product of a vector, which is now a rank-1 matrix, with another vector, which is now a slice of a tensor. Such inner products can ∗Full version appears on arXiv, 2017. ‡Work done while visiting IBM Almaden. †Supported by XDATA DARPA Air Force Research Laboratory contract FA8750-12-C-0323. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. be approximated much faster by instead computing the inner product of the sketched vectors, which have significantly lower dimension. One can also replace the sketching with sampling to approximate inner products; we discuss some sampling schemes [17, 4] below and compare them to our work. 1.1 Our Contributions We show in a number of important cases, one can achieve the same theoretical guarantees in the work of Wang et al. [23] (which was applied later by Tung et al. [18]), in sublinear time, that is, without reading most of the input tensor. While previous work needs to walk through the input at least once to create a sketch, we show one can instead perform importance sampling of the tensor based on the current iterate, together with reading a few entries of the tensor which help us learn the norms of tensor slices. We use a version of `2-sampling for our importance sampling. One source of speedup in our work and in Wang et al. [23] comes from approximating inner products in iterations in the robust tensor power method (see below). To estimate 〈u, v〉 for n-dimensional vectors u and v, their work computes sketches S(u) and S(v) and approximates 〈u, v〉 ≈ 〈S(u), S(v)〉. Instead, if one has u, one can sample coordinates i proportional to u2i , which is known as `2-sampling [14, 8]. One estimates 〈u, v〉 as vi‖u‖ 2 2 ui , which is unbiased and has variance O(‖u‖22‖v‖22). These guarantees are similar to those using sketching, though the constants are significantly smaller (see below), and unlike sketching, one does not need to read the entire tensor to perform such sampling. Symmetric Tensors: As in [23], we focus on orthogonal tensor decomposition of symmetric tensors, though we explain the extension to the asymmetric case below. Symmetric tensors arise in engineering applications, for example, to represent the symmetric tensor field of stress, strain, and anisotropic conductivity. Another example is diffusion MRI in which one uses symmetric tensors to describe diffusion in the brain or other parts of the body. In spectral methods symmetric tensors are exactly those that come up in Latent Dirichlet Allocation problems. Although one can symmetrize a tensor using simple matrix operations (see, e.g., [1]), we cannot do this in sublinear time. In orthogonal tensor decompostion of a symmetric matrix, there is an underlying n× n · · ·n tensor T∗ = ∑k i=1 λiv ⊗p i , and the input tensor is T = T ∗+E, where ‖E ‖2 ≤ . We have λ1 > λ2 > · · · > λk > 0 and that {vi}ki=1 is a set of orthonormal vectors. The goal is to reconstruct approximations v̂i to the vectors vi, and approximations λ̂i to the λi. Our results naturally generalize to tensors with different lengths in different dimensions. For simplicity, we first focus on order p = 3. In the robust tensor power method [1], one generates a random initial vector u, and performs T update steps û = T(I, u, u)/‖T(I, u, u)‖2, where T(I, u, u) = [ n∑ j=1 n∑ `=1 T1,j,` uju`, n∑ j=1 n∑ `=1 T2,j,` uju`, · · · , n∑ j=1 n∑ `=1 Tn,j,` uju` ] . The matrices T1,∗,∗, . . . ,Tn,∗,∗ are referred to as the slices. 
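As a baseline for the update just described, the exact contraction T(I, u, u) followed by normalization can be written in one line of NumPy; it touches all n^3 entries per iteration, which is precisely the cost the sampling scheme avoids. A toy sketch (names ours):

```python
import numpy as np

def exact_power_update(T, u):
    """One exact robust-tensor-power-method step: u <- T(I,u,u) / ||T(I,u,u)||_2,
    where T(I,u,u)_i = sum_{j,k} T[i,j,k] * u[j] * u[k]."""
    s = np.einsum('ijk,j,k->i', T, u, u)
    return s / np.linalg.norm(s)
```

Starting from a random unit vector and repeating this update a small number of times typically converges to the top eigenvector when the noise is small, which is the behaviour the sampled estimators need to preserve.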
The vector û typically converges to the top eigenvector in a small number of iterations, and one often chooses a small number L of random initial vectors to boost confidence. Successive eigenvectors can be found by deflation. The algorithm and analysis immediately extend to higher order tensors. We use `2-sampling to estimate T(I, u, u). To achieve the same guarantees as in [23], for typical settings of parameters (constant k and several eigenvalue assumptions) naïvely one needs to take O(n2) `2-samples from u for each slice in each iteration, resulting in Ω(n3) time and destroying our sublinearity. We observe that if we additionally knew the squared norms ‖T1,∗,∗ ‖2F , . . . , ‖Tn,∗,∗ ‖2F , then we could take O(n2) `2-samples in total, where we take ‖Ti,∗,∗ ‖2F ‖T ‖2F ·O(n2) `2-samples from the i-th slice in expectation. Perhaps in some applications such norms are known or cheap to compute in a single pass, but without further assumptions, how can one obtain such norms in sublinear time? If T is a symmetric tensor, then Tj,j,j = ∑k i=1 λiv 3 i,j + Ej,j,j . Note that if there were no noise, then we could read off approximations to the slice norms, since ‖Tj,∗,∗ ‖2F = ∑k i=1 λ 2 i v 2 i,j , and so T 2/3 j,j,j is an approximation to ‖Tj,∗,∗ ‖2F up to factors depending on k and the eigenvalues. However, there is indeed noise. To obtain non-trivial guarantees, the robust tensor power method assumes ‖E ‖2 = O(1/n), where ‖E ‖2 = sup ‖u‖2=‖v‖2=‖w‖2=1 E(u, v, w) = sup ‖u‖2=‖v‖2=‖w‖2=1 n∑ i=1 n∑ j=1 n∑ k=1 Ei,j,k uivjwk, which in particular implies |Ej,j,j | = O(1/n). This assumption comes from the Θ(1/ √ n)correlation of the random initial vector to v1. This noise bound does not trivialize the problem; indeed, Ej,j,j can be chosen adversarially subject to |Ej,j,j | = O(1/n), and if the vi were random unit vectors and the λi and k were constant, then ∑k i=1 λiv 3 i,j = O(1/n 3/2), which is small enough to be completely masked by the noise Ej,j,j . Nevertheless, there is a lot of information about the slice norms. Indeed, suppose k = 1, λ1 = Θ(1), and ‖T ‖F = 1. Then Tj,j,j = Θ(v31,j) + Ej,j,j , and one can show ‖Tj,∗,∗ ‖2F = λ21v21,j ± O(1/n). Again using that |Ej,j,j | = O(1/n), this implies ‖Tj,∗,∗ ‖2F = ω(n−2/3) if and only if Tj,j,j = ω(1/n), and therefore one would notice this by reading Tj,j,j . There can only be o(n2/3) slices j for which ‖Tj,∗,∗ ‖2F = ω(n−2/3), since ‖T ‖2F = 1. Therefore, for each of them we can afford to take O(n2) `2-samples and still have an O(n2+2/3) = o(n3) sublinear running time. The remaining slices all have ‖Tj,∗,∗ ‖2F = O(n−2/3), and therefore if we also take O(n1/3) `2-samples from every slice, we will also estimate the contribution to T(I, u, u) from these slices well. This is also a sublinear O(n2+1/3) number of samples. While the previous paragraph illustrates the idea for k = 1, for k = 2 we need to read more than the Tj,j,j entries to decide how many `2-samples to take from a slice. The analysis is more complicated because of sign cancellations. Even for k = 2 we could have Tj,j,j = λ1v31,j + λ2v 3 2,j + Ej,j,j , and if v1,j = −v2,j then we may not detect that ‖Tj,∗,∗ ‖2F is large. We fix this by also reading the entries Ti,j,j ,Tj,i,j , and Tj,j,i for every i and j. This is still only O(n2) entries and so we are still sublinear time. Without additional assumptions, we only give a formal analysis of this for k ∈ {1, 2}. 
More importantly, if instead of third-order symmetric tensors we consider p-th order symmetric tensors for even p, we do not have such sign cancellations. In this case we do not have any restrictions on k for estimating slice norms. One does need to show after deflation, the slice norms can still be estimated; this holds because the eigenvectors and eigenvalues are estimated sufficiently well. We also give several per-iteration optimizations of our algorithm, based on careful implementations of generating a sorted list of random numbers and random permutations. We find empirically (see below) that we are much faster per iteration than previous sketching algorithms, in addition to not having to read the entire input tensor in a preprocessing step. Asymmetric Tensors: For asymmetric tensors, e.g., 3rd-order tensors of the form ∑k i=1 λiui⊗ vi⊗ wi, it is impossible to achieve sublinear time in general, since it is hard to distinguish T = ei⊗ej⊗ek for random i, j, k ∈ {1, 2, . . . , n} from T = 0⊗3. We make a necessary and sufficient assumption that all the entries of the ui are less than n−γ for an arbitrarily small constant γ > 0. In this case, all slice norms are o(n−γ) and by taking O(n2−γ) samples from each slice we achieve sublinear time. We can also apply such an assumption to symmetric tensors. Empirical Results: One of the main strengths of our work is our empirical results. In each iteration we approximate T(I, u, u) a total of B times independently and take the median to increase our confidence. In the notation of [23], B corresponds to the number of independent sketches used. While the median works empirically, there are some theoretical issues with it discussed in Remark 4. Also let b be the total number of `2-samples we take per iteration, which corresponds to the sketch size in the notation of [23]. We found that empirically we can set B and b to be much smaller than that in [23] and achieve the same error guarantees. One explanation for this is that the variance bound we obtain via importance sampling is a factor of 43 = 64 smaller than in [23], and for p-th order tensors, a factor of 4p smaller. To give an idea of how much smaller we can set b andB, to achieve roughly the same squared residual norm error on the synthetic data sets of dimension 1200 for finding a good rank-1 approximation, the algorithm of [23] would need to set parameters b = 216 and B = 50, whereas we can set b = 10× 1200 and B = 5. Our running time is 2.595 seconds and we have no preprocessing time, whereas the algorithm of [23] has a running time of 116.3 seconds and 55.34 seconds of preprocessing time. We refer the reader to Table 1 in Section 3. In total we are over 50 times faster. We also demonstrate our algorithm in a real-world application using real datasets, even when the datasets are sparse. Namely, we consider a spectral algorithm for Latent Dirichlet Allocation [1, 2] which uses tensor decomposition as its core computational step. We show a significant speedup can be achieved on tensors occurring in applications such as LDA, and we refer the reader to Table 2 in Section 3. For example, on the wiki [23] dataset with a tensor dimension of 200, we run more than 5 times faster than the sketching-based method. 
Previous Sampling Algorithms: Previous sampling-based schemes of [17, 4] do not achieve our guarantees, because [17] uses uniform sampling, which does not work for tensors with spiky elements, while the non-uniform sampling in [4] requires touching all of the entries in the tensor and making two passes over it. Notation Let [n] denote {1, 2, . . . , n}. Let ⊗ denote the outer product, and u⊗3 = u ⊗ u ⊗ u. Let T ∈ Rnp , where p is the order of tensor T and n is the dimension of tensor T. Let 〈A,B〉 denote the entry-wise inner product between two tensors A,B ∈ Rnp , e.g., 〈A,B〉 = ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 Ai1,i2,··· ,ip ·Bi1,i2,··· ,ip . For a tensor A ∈ Rn p , ‖A ‖F = ( ∑n i1=1 ∑n i2=1 · · · ∑n ip=1 A2i1,··· ,ip) 1 2 . For random variable X let E[X] denote its expectation of X and V[X] its variance (if these quantities exist). 2 Main Results We explain the details of our main results in this section. First, we state the importance sampling lemmas for our tensor application. Second, we explain how to quickly produce a list of random tuples according to a certain distribution needed by our algorithm. Third, we combine the first and the second parts to get a fast way of approximating tensor contractions, which are used as subroutines in each iteration of the robust tensor power method. We then provide our main theoretical results, and how to estimate the slice norms needed by our main algorithm. Importance sampling lemmas. Approximating an inner product is a simple application of importance sampling. Tensor contraction T(u, v, w) can be regarded as the inner product between two n3-dimensional vectors, and thus importance sampling can be applied. Lemma 1 suggests that we can take a few samples according to their importance, e.g., we can sample Ti,j,k uivjwk with probability |uivjwk|2/‖u‖22‖v‖22‖w‖22. As long as the number of samples is large enough, it will approximate the true tensor contraction ∑ i ∑ j ∑ kTi,j,k uivjwk with small variance after a final rescaling. Lemma 1. Suppose random variable X = Ti,j,k uivjwk/(piqjrk) with probability piqjrk where pi = |ui|2/‖u‖22, qj = |vj |2/‖v‖22, and rk = |wk|2/‖w‖22, and we take L i.i.d. samples of X , denoted X1, X2, · · · , XL. Let Y = 1L ∑L `=1X`. Then (1) E[Y ] = 〈T, u ⊗ v ⊗ w〉, and (2) V[Y ] ≤ 1L‖T ‖ 2 F · ‖u⊗ v ⊗ w‖2F . Similarly, we also have importance sampling for each slice Ti,∗,∗, i.e., “face” of T. Lemma 2. For all i ∈ [n], suppose random variable Xi = Ti,j,k vjwk/(qjrk) with probability qjrk, where qj = |vj |2/‖v‖22 and rk = |wk|2/‖w‖22, and we take Li i.i.d. samples of Xi, say Xi1, X i 2, · · · , XiLi . Let Y i = 1Li ∑L `=1X i ` . Then (1) E[Y i] = 〈Ti,∗,∗, v ⊗ w〉 and (2) V[Y i] ≤ 1 Li ‖Ti,∗,∗ ‖2F ‖v ⊗ w‖2F . Generating importance samples in linear time. We need an efficient way to sample indices of a vector based on their importance. We view this problem as follows: imagine [0, 1] is divided into z “bins” with different lengths corresponding to the probability of selecting each bin, where z is the number of indices in a probability vector. We generate m random numbers uniformly from [0, 1] and see which bin each random number belongs to. If a random number is in bin i, we sample the i-th index of a vector. There are known algorithms [6, 19] to solve this problem in O(z +m) time. We give an alternative algorithm GENRANDTUPLES. 
Our algorithm combines Bentley and Saxe’s algorithm [3] for efficiently generating m sorted random numbers in O(m) time, and Knuth’s shuffling algorithm [12] for generating a random permutation of [m] in O(m) time. We use the notation CUMPROB(v, w) and CUMPROB(u, v, w) for the algorithm creating the distributions on Rn2 and Rn3 of Lemma 2 and Lemma 1, respectively. We note that naïvely applying previous algorithms would require z = O(n2) and z = O(n3) time to form these two distributions, but we can take O(m) samples from them implicitly in O(n+m) time. Fast approximate tensor contractions. We propose a fast way to approximately compute tensor contractions T(I, v, w) and T(u, v, w) with a sublinear number of samples of T, as shown in Alogrithm 1 and Algorithm 2. Naïvely computing tensor contractions using all of the entries of T gives an exact answer but could take n3 time. Also, to keep our algorithm sublinear time, we never explicitly compute the deflated tensor; rather we represent it implicitly and sample from it. Algorithm 1 Subroutine for approximate tensor contraction T(I, v, w) 1: function APPROXTIVW(T, v, w, n,B, {b̂i}) 2: q̃, r̃ ← CUMPROB(v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES( ∑n i=1 b̂i, q̃, r̃) 5: for i = 1→ n do 6: s(d)i ← 0 7: for ` = 1→ b̂i do 8: (j, k)← L(i−1)b+` 9: s(d)i ← s (d) i + 1 qjrk Ti,j,k ·uj · uk 10: T̂(I, v, w)i ← median d∈[B] s (d) i /b̂i, ∀i ∈ [n] 11: return T̂(I, v, w) Algorithm 2 Subroutine for approximate tensor contraction T(u, v, w) 1: function APPROXTUVW(T, u, v, w, n,B, b̂) 2: p̃, q̃, r̃ ← CUMPROB(u, v, w) 3: for d = 1→ B do 4: L ← GENRANDTUPLES(̂b, p̃, q̃, r̃). 5: s(d) ← 0 6: for (i, j, k) ∈ L do 7: s(d) ← s(d) + 1piqjrk Ti,j,k ·ui · uj · uk 8: s(d) ← s(d)/b̂ 9: T̂(u, v, w)← median d∈[B] s(d) 10: return T̂(u, v, w) The following theorem gives the error bounds of APPROXTIVW and APPROXTUVW (in Algorithm 1 and 2). Let b̂i be the number samples we take from slice i ∈ [n] in APPROXTIVW, and let b̂ denote the total number of samples in our algorithm. Theorem 3. For T ∈ Rn×n×n and u ∈ Rn with ‖u‖2 = 1, define the number ε1,T(u) = T̂(u, u, u) − T(u, u, u) and the vector ε2,T(u) = T̂(I, u, u) − T(I, u, u). For any b > 0, if b̂i & b‖Ti,∗,∗ ‖2F /‖T ‖2F then the following bounds hold 1: E[|ε1,T(u)|2] = O(‖T ‖2F /b), and E[‖ε2,T(u)‖22] = O(n‖T ‖2F /b). In addition, for any fixed ω ∈ Rn with ‖ω‖2 = 1, E[〈ω, ε2,T (u)〉2] = O(‖T ‖2F /b). (1) Eq. (1) can be obtained by observing that each random variable [ε2,T(u)]i is independent and so V[〈ω, ε2,T(u)〉] = ∑n i=1 ω 2 i ‖Ti,∗,∗ ‖2F b̂i . ( ∑n i=1 ω 2 i ) ‖T ‖2F b = ‖T ‖2F b . Remark 4. In [23], the coordinate-wise median of B estimates to the T(I, v, w) is used to boost the success probability. There appears to be a gap [21] in their argument as it is unclear how to achieve (1) after taking a coordinate-wise median, which is (7) in Theorem 1 of [23]. To fix this, we instead pay a factor proportional to the number of iterations in Algorithm 3 in the sample complexity b̂. Since we have expectation bounds on the quantities in Theorem 3, we can apply a Markov bound and a union bound across all iterations. This suffices for our main theorem concerning sublinear time below. One can obtain high probability bounds by running Algorithm 3 multiple times independently, and taking coordinate-wise medians of the output eigenvectors. Empirically, our algorithm works even if we take the median in each iteration, which is done in line 10 in Algorithm 1. 
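To complement the pseudocode, here is a compact sketch of the slice-wise estimator APPROXTIVW (Algorithm 1) under the same illustrative conventions as the sketch above: each coordinate of T(I, u, u) is estimated within its slice (Lemma 2), and a median is taken over B independent repetitions. The dense-tensor input and variable names are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def approx_tivw(T, u, B, b_hat, rng=None):
    """Estimate the vector T(I, u, u); median over B independent repetitions.

    b_hat[i] is the number of (j, k) samples drawn for slice i.
    """
    rng = rng or np.random.default_rng()
    n = len(u)
    q = u**2 / np.dot(u, u)                  # sampling distribution over j (and over k)
    estimates = np.empty((B, n))
    for d in range(B):
        for i in range(n):
            j = rng.choice(n, size=int(b_hat[i]), p=q)
            k = rng.choice(n, size=int(b_hat[i]), p=q)
            # Each term T[i,j,k] * u_j * u_k / (q_j * q_k) is an unbiased
            # estimate of <T_{i,*,*}, u x u> (Lemma 2); average the b_hat[i] samples.
            estimates[d, i] = np.mean(T[i, j, k] * u[j] * u[k] / (q[j] * q[k]))
    return np.median(estimates, axis=0)
```

A scalar routine for T(u, u, u) (Algorithm 2) follows the same pattern, taking a median of B averaged estimates of the full contraction.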
Replacing Theorem 1 in [23] by our Theorem 3, the rest of the analysis in [23] is unchanged. Our Algorithm 3 is the same as the sketching-based robust tensor power method in [23], except for lines 10, 12, 15, and 17, where the sketching-based approximate tensor contraction is replaced by our importance sampling procedures APPROXTUVW and APPROXTIVW. Rather than use Theorem 2 of Wang et al. [23], the main theorem concerning the correctness of the robust tensor decomposition algorithm, we use a recent improvement of it by Wang and Anandkumar in Theorems 4.1 and 4.2 of [22], which states general guarantees for any algorithm satisfying per iteration noise guarantees. These theorems also remove many of the earlier eigenvalue assumptions in Theorem 2 of [23]. Theorem 5. (Theorem 4.1 and 4.2 of [22]), Suppose T = T∗+E, where T = ∑k i=1 λiv ⊗3 i with λi > 0 and orthonormal basis vectors {v1, . . . , vk} ⊆ Rn, n ≥ k. Let λmax, λmin be the largest and smallest values in {λi}ki=1 and {λ̂i, v̂i}ki=1 be outputs of the robust tensor power method. There exist absolute constants K0, C0, C1, C2, C3 > 0 such that if E satisfies ‖E(I, u(τ)t , u (τ) t )‖2 ≤ , |E(vi, u (τ) t , u (τ) t )| ≤ min{ / √ k,C0λmin/n}, (2) 1For two functions f, g, we use the shorthand f . g (resp. &) to indicate that f ≤ Cg (resp. ≥) for some absolute constant C. Algorithm 3 Our main algorithm 1: function IMPORTANCESAMPLINGRB(T, n,B, b) 2: if si are known, where ‖Ti,∗,∗ ‖2F . si then 3: b̂i ← b · si/‖T ‖2F ,∀i ∈ [n] 4: else 5: b̂i ← b/n, ∀i ∈ [n] 6: b̂ = ∑n i=1 b̂i 7: for ` = 1→ L do 8: u(`) ←INITIALIZATION 9: for t = 1→ T do 10: u(`) ← APPROXTIVW(T, u(`), u(`), n,B, {b̂i}) 11: u(`) ← u(`)/‖u(`)‖2 12: λ(`) ← APPROXTUVW(T, u(`), u(`), u(`), n,B, b̂) 13: `∗ ← arg max`∈[L] λ(`), u∗ ← u(` ∗) 14: for t = 1→ T do 15: u∗ ← APPROXTIVW(T, u∗, u∗, n,B, {b̂i}) 16: u∗ ← u∗/‖u∗‖2 17: λ∗ ← APPROXTUVW(T, u∗, u∗, u∗, n,B, b̂) 18: return λ∗, u∗ for all i ∈ [k], t ∈ [T ], and τ ∈ [L] and furthermore ≤ C1 · λmin/ √ k, T = Ω(log(λmaxn/ )), L ≥ max{K0, k} log(max{K0, k}), then with probability at least 9/10, there exists a permutation π : [k]→ [k] such that |λi − λ̂π(i)| ≤ C2 , ‖vi − v̂π(i)‖2 ≤ C3 /λi, ∀i = 1, · · · , k. Combining the previous theorem with our importance sampling analysis, we obtain: Theorem 6 (Main). Assume the notation of Theorem 5. For each j ∈ [k], suppose we take b̂(j) =∑n i=1 b̂ (j) i samples during the power iterations for recovering λ̂j and v̂j , the number of samples for slice i is b̂(j)i & bkT‖[T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ]i,∗,∗‖2F /‖T− ∑j−1 l=1 λ̂lv̂ ⊗3 l ‖2F where b & n‖T ‖2F / 2 + ‖T ‖2F /min{ / √ k, λmin/n}2. Then the output guarantees of Theorem 5 hold for Algorithm 3 with constant probability. Our total time is O(LTk2b̂) and the space is O(nk), where b̂ = maxj∈[k] b̂(j). In Theorem 3, if we require b̂i = b‖Ti,∗,∗ ‖2F /‖T ‖2F , we need to scan the entire tensor to compute ‖Ti,∗,∗ ‖2F , making our algorithm not sublinear. With the following mild assumption in Theorem 7, our algorithm is sublinear when sampling uniformly (̂bi = b/n) without computing ‖Ti,∗,∗ ‖2F : Theorem 7 (Bounded slice norm). There is a constant α > 0, a constant β ∈ (0, 1] and a sufficiently small constant γ > 0, such that, for any 3rd order tensor T = T∗+E ∈ Rn3 with rank(T∗) ≤ nγ , λk ≥ 1/nγ , if ‖Ti,∗,∗ ‖2F ≤ 1nβ ‖T ‖ 2 F for all i ∈ [n], and E satisfies (2), then Algorithm 3 runs in O(n3−α) time. The condition β ∈ (0, 1] is a practical one. When β = 1, all tensor slices have equal Frobenius norm. 
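For readers who prefer code, the following sketch mirrors the outer loop of Algorithm 3 for recovering a single eigenpair. It reuses the approx_tivw and approx_tuvw sketches above (our names, not the paper's code), uses the uniform per-slice budget b̂_i = b/n of the no-prescanning variant, and omits deflation for brevity.

```python
import numpy as np

def rank1_power_method(T, n, L, T_iters, B, b, rng=None):
    """One eigenpair of a symmetric tensor via the sampled robust power method."""
    rng = rng or np.random.default_rng()
    b_hat = np.full(n, max(1, b // n), dtype=int)      # uniform per-slice budget
    candidates = []
    for _ in range(L):                                 # L random restarts
        u = rng.standard_normal(n)
        u /= np.linalg.norm(u)
        for _ in range(T_iters):                       # sampled power iterations
            u = approx_tivw(T, u, B, b_hat, rng)
            u /= np.linalg.norm(u)
        lam = np.median([approx_tuvw(T, u, u, u, int(b_hat.sum()), rng)
                         for _ in range(B)])
        candidates.append((lam, u))
    lam_star, u_star = max(candidates, key=lambda c: c[0])
    for _ in range(T_iters):                           # polish the best candidate
        u_star = approx_tivw(T, u_star, B, b_hat, rng)
        u_star /= np.linalg.norm(u_star)
    lam_star = np.median([approx_tuvw(T, u_star, u_star, u_star,
                                      int(b_hat.sum()), rng) for _ in range(B)])
    return lam_star, u_star
```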
The case β = 0 only occurs when ‖Ti,∗,∗ ‖F = ‖T ‖F ; i.e., all except one slice is zero. This theorem can also be applied to asymmetric tensors, since the analysis in [23] can be extended to them. For certain cases, we can remove the bounded slice norm assumption. The idea is to take a sublinear number of samples from the tensor to obtain upper bounds on all slice norms. In the full version, we extend the algorithm and analysis of the robust tensor power method to p > 3 by replacing contractions T(u, v, w) and T(I, v, w) with T(u1, u2, · · · , up) and T(I, u2, · · · , up). As outlined in Section 1, when p is even, because we do not have sign cancellations we can show: Theorem 8 (Even order). There is a constant α > 0 and a sufficiently small constant γ > 0, such that, for any even order-p tensor T = T∗+E ∈ Rnp with rank(T∗) ≤ nγ , p ≤ nγ and λk ≥ 1/nγ . For any sufficiently large constant c0, there exists a sufficiently small constant c > 0, for any ∈ (0, cλk/(c0p2kn(p−2)/2)) if E satisfies ‖E ‖2 ≤ /(c0 √ n), Algorithm 3 runs in O(np−α) time. As outlined in Section 1, for p = 3 and small k we can take sign considerations into account: Theorem 9 (Low rank). There is a constant α > 0 and a sufficiently small constant γ > 0 such that for any symmetric tensor T = T∗+E ∈ Rn3 with E satisfying (2), rank(T∗) ≤ 2, and λk ≥ 1/nγ , then Algorithm 3 runs in O(n3−α) time. 3 Experiments 3.1 Experiment Setup and Datasets Our implementation shares the same code base 1 as the sketching-based robust tensor power method proposed in [23]. We ran our experiments on an i7-5820k CPU with 64 GB of memory in singlethreaded mode. We ran two versions of our algorithm: the version with pre-scanning scans the full tensor to accurately measure per-slice Frobenius norms and make samples for each slice in proportion to its Frobenius norm in APPROXTIVW; the version without pre-scanning assumes that the Frobenius norm of each slice is bounded by 1nα ‖T ‖ 2 F , α ∈ (0, 1] and uses b/n samples per slice, where b is the total number of samples our algorithm makes, analogous to the sketch length b in [23]. Synthetic datasets. We first generated an orthonormal basis {vi}ki=1 and then computed the synthetic tensor as T∗ = ∑k i=1 λiv ⊗3 i , with λ1 ≥ · · · ≥ λk. Then we normalized T ∗ such that ‖T∗ ‖F = 1, and added a symmetric Gaussian noise tensor E where Eijl ∼ N (0, σn1.5 ) for i ≤ j ≤ l. Then σ controls the noise-to-signal ratio and we kept it as 0.01 in all our synthetic tensors. For the eigenvalues λi, we generated three different decays: inverse decay λi = 1i , inverse square decay λi = 1 i2 , and linear decay λi = 1− i−1 k . We also set k = 100 when generating tensors, since higher rank eigenvalues were almost indistinguishable from the added noise. To show the scalability of our algorithm, we generated tensors with different dimensions: n = 200, 400, 600, 800, 1000, 1200. Real-life datasets. Latent Dirichlet Allocation [5] (LDA) is a powerful generative statistical model for topic modeling. A spectral method has been proposed to solve LDA models [1, 2] and the most critical step in spectral LDA is to decompose a symmetric K × K × K tensor with orthogonal eigenvectors, where K is the number of modeled topics. We followed the steps in [1, 18] and built a K ×K ×K tensor TLDA for each dataset, and then ran our algorithms directly on TLDA to see how it works on those tensors in real applications. In our experiments we keep K = 200. 
We used the two same datasets as the previous work [23]: Wiki and Enron, as well as four additional real-life datasets. We refer the reader to our GitHub repository 2 for our code and full results. 3.2 Results We considered running time and the squared residual norm to evaluate the performance of our algorithms. Given a tensor T ∈ Rn3 , let ‖T− ∑k i=1 λiui ⊗ vi ⊗ wi‖2F denote the squared residual norm where {(λ1, u1, v1, w1), · · · , (λk, uk, vk, wk)} are the eigenvalue/eigenvectors obtained by the robust power method. To reduce the experiment time we looked only for the first eigenvalue and eigenvector, but our algorithm is capable of finding any number of eigenvalues/eigenvectors. We list the pre-scanning time as preprocessing time in tables. It only depends on the tensor dimension n and unlike the sketching based method, it does not depend on b. Pre-scanning time is very short, because it only requires one pass of sequential access to the tensor which is very efficient on hardware. Sublinear time verification. Our theoretical result suggests the total number of samples bno-prescan for our algorithm without pre-scanning is n1−α(α ∈ (0, 1]) times larger than bprescan for our algorithm with pre-scanning. But in experiments we observe that when bno-prescan = bprescan both algorithms achieve very similar accuracy, indicating that in practice α ≈ 1. Synthetic datasets. We ran our algorithm on a large number of synthetic tensors with different dimensions and different eigengaps. Table 1 shows results for a tensor with 1200 dimensions with 100 non-zero eigenvalues decaying as λi = 1i2 . To reach roughly the same residual norm, the running time of our algorithm is over 50 times faster than that of the sketching-based robust tensor power method, thanks to the fact that we usually need a relatively small B and b to get a good residual, and the hidden constant factor in the running time of sampling is much smaller than that of sketching. Our algorithm scales well on large tensors due to its sub-linear nature. In Figure 1(a), for the sketching-based method we kept b = 216, B = 30 for n ≤ 800 and B = 50 for n > 800 (larger n requires more sketches to observe a reasonable recovery). For our algorithm, we chose b and B such 1http://yining-wang.com/fftlda-code.zip 2https://github.com/huanzhang12/sampling_tensor_decomp/ that for each n, our residual norm is on-par or better than the sketching-based method. Our algorithm needs much less time than the sketching-based one over all dimensions. Another advantage of our algorithm is that there are zero or very minimal preprocessing steps. In Figure 1(b), we can see how the preprocessing time grows to prepare sketches when the dimension increases. For applications where only the first few eigenvectors are needed, the preprocessing time could be a large overhead. Real-life datasets Due to the small tensor dimension (200), our algorithm shows less speedup than the sketching-based method. But it is still 2 ∼ 6 times faster in each of the six real-life datasets, achieving the same squared residual norm. Table 2 reports results for one of the datasets in many different settings of (b, B). Like in synthetic datasets, we also empirically observe that the constant b in importance sampling is much smaller than the b used in sketching to get the same error guarantee.
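For reproducibility, here is a minimal sketch of the synthetic-tensor construction and of the squared-residual-norm metric described above. All names are ours; the naive six-fold symmetrization of the noise tensor is only meant to convey the setup and does not reproduce the exact entry-wise variance stated in the text.

```python
import numpy as np

def make_synthetic_tensor(n, k, sigma=0.01, decay="inv_sq", rng=None):
    """T = sum_i lambda_i v_i^{x3}, normalized to unit Frobenius norm, plus symmetric noise."""
    rng = rng or np.random.default_rng(0)
    V, _ = np.linalg.qr(rng.standard_normal((n, k)))       # orthonormal basis v_1..v_k
    i = np.arange(1, k + 1)
    lam = {"inv": 1.0 / i, "inv_sq": 1.0 / i**2, "linear": 1.0 - (i - 1) / k}[decay]
    T = np.einsum("i,ai,bi,ci->abc", lam, V, V, V)
    T /= np.linalg.norm(T)
    E = rng.normal(0.0, sigma / n**1.5, size=(n, n, n))
    E = (E + E.transpose(0, 2, 1) + E.transpose(1, 0, 2) +
         E.transpose(1, 2, 0) + E.transpose(2, 0, 1) + E.transpose(2, 1, 0)) / 6.0
    return T + E

def squared_residual(T, lam, U, V, W):
    """|| T - sum_i lam_i u_i x v_i x w_i ||_F^2 for the recovered eigenpairs."""
    approx = np.einsum("i,ai,bi,ci->abc", lam, U, V, W)
    return np.linalg.norm(T - approx) ** 2
```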
1. What is the main contribution of the paper regarding symmetric, orthogonal tensor decomposition? 2. How does the proposed method compare to prior works in terms of running time and computational complexity? 3. Do you have any concerns or suggestions regarding the analysis and theoretical guarantees provided in the paper? 4. How does the paper's experimental results support its claims, particularly for large synthetic data sets? 5. Are there any aspects of the presentation that can be improved, such as clarifying the meaning of certain quantities or providing a clearer distinction between the main message of the paper and supplementary materials?
Review
Review This paper presents a randomized method for decomposing a symmetric, orthogonal tensor that is “nearly low rank”. Specifically, the tensor should expressible as a low-rank tensor + an arbitrary noise tensor with bounded norm. The fact that the tensor is symmetric and composed of orthogonal components may sound restrictive, but this is exactly the sort of tensor decomposition problem that comes up in spectral methods that solve Latent Dirichlet Allocation problems. So, it’s an interesting model to study. The method introduced very closely follows work in a paper from NIPS 2015, “Fast and guaranteed tensor decomposition via sketching” which uses a standard tensor power method for decomposition, but speeds up each iteration by accelerating the required tensor contractions (vector dot products against the tensor) via randomized sketching techniques. This paper uses the same algorithm, but instead of using sketches based on “random projections”, which take random linear combinations of all of the components of the tensor, it uses a subsampling technique to approximate the contractions. The authors main selling point is that this approach should allow for better running times since they don’t have to touch every entry in the tensor with each iteration. Sampling also avoids and expensive preprocessing cost incurred by the sketching algorithm. Without knowing the prior work, it’s a bit tough to understand how the runtime of this algorithm compares. In the introduction it’s stated that the sampling algorithm is slower per iteration than the sketching methods, which are accelerated with sparse hashing and fast Fourier methods. However, runtimes are stated in terms of the sketching dimension b which is not the same for both algorithms. It can vary by orders of magnitude in the experiments. It’s stated that the algorithm has a faster runtime overall, I believe because of preprocessing costs in the sketching algorithm, but I would like to see these costs laid out so that the overall runtime comparison is clear. Regardless, the algorithm performs well experimentally, providing a (3x-6x) speedup for small datasets and up to a 50x speedup for large synthetic data sets. This is especially impressive since the sampling algorithm is much simpler than the prior work, whose implementation seemingly requires many optimizations and detailed parameter setting.This paper is decently written and the experimental results look promising. However, I don’t think the result is quite strong enough that I would recommend acceptance for NIPS. In general, the paper leans heavily on the work in “Fast and guaranteed tensor decomposition via sketching”. The idea to use sampling is nice, but the analysis is straightforward since the sampling scheme plugs into the prior work black-box. The authors do not prove any tighter theoretical guarantees and it’s not clear if their methods give any provable runtime improvements over the sketching methods. I would encourage the authors to clarify this aspect of their paper. More generally, while it’s nice to have theoretical guarantees, I worry that the bound in Lemma 10 is too complex/loose to provide indication of the algorithm’s quality. In addition to a very strong requirement on the noise eps, it has an inherent n^3 dependence, a stable rank dependence, a polynomial dependence on the failure probability, and spectral gap and condition number dependencies that I don’t expect to be small. 
It would be great to see if the analysis of these sketching techniques can be pushed beyond the work in “Fast and guaranteed tensor decomposition via sketching” to obtain a more interpretable/convincing bound. In light of its demonstrated practical performance, perhaps sampling can be analyzed in a tighter way? As a more specific point, I don’t understand the meaning of r(λ) = max(λ_i/λ_j). Wouldn’t this simply equal λ_1/λ_k? As far as presentation goes, I have a couple of comments: I think it’s a bit confusing to claim that the algorithm runs in “sublinear time”: it seems that the final runtime in Theorem 10 has a total O(n^3) dependence since b depends on n^2. I understand that in practice the running time *can* be sublinear because the required number of samples is much lower than predicted by the theory, but I think this should be made more clear in the abstract and introduction. For a shortened form of the paper (like the NIPS submission) I would prefer to see the sampling analysis (currently in the supplemental material) in the main text, in lieu of the discussion of how to very efficiently generate importance samples. While I appreciate the careful work on this aspect of the algorithm (the actual sampling process is often overlooked by other papers!), ultimately it focuses on smallish log factors and is somewhat orthogonal to the main message of the paper. It could be relegated to an appendix.
NIPS
Title Reward is enough for convex MDPs Abstract Maximising a cumulative reward function that is Markov and stationary, i.e., defined over state-action pairs and independent of time, is sufficient to capture many kinds of goals in a Markov decision process (MDP). However, not all goals can be captured in this manner. In this paper we study convex MDPs in which goals are expressed as convex functions of the stationary distribution and show that they cannot be formulated using stationary reward functions. Convex MDPs generalize the standard reinforcement learning (RL) problem formulation to a larger framework that includes many supervised and unsupervised RL problems, such as apprenticeship learning, constrained MDPs, and so-called ‘pure exploration’. Our approach is to reformulate the convex MDP problem as a min-max game involving policy and cost (negative reward) ‘players’, using Fenchel duality. We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature. 1 Introduction In reinforcement learning (RL), an agent learns how to map situations to actions so as to maximize a cumulative scalar reward signal. The learner is not told which actions to take, but instead must discover which actions lead to the most reward [64]. Mathematically, the RL problem can be written as finding a policy whose state occupancy has the largest inner product with a reward vector [55], i.e., the goal of the agent is to solve RL: max dπ∈K ∑ s,a r(s, a)dπ(s, a), (1) where dπ is the state-action stationary distribution induced by policy π and K is the set of admissible stationary distributions (see Definition 1). A significant body of work is dedicated to solving the RL problem efficiently in challenging domains [45, 62]. However, not all decision making problems of interest take this form. In particular we consider the more general convex MDP problem, Convex MDP: min dπ∈K f(dπ), (2) where f : K → R is a convex function. Sequential decision making problems that take this form include Apprenticeship Learning (AL), pure exploration, and constrained MDPs, among others; see Table 1. In this paper we prove the following claim: We can solve Eq. (2) by using any algorithm that solves Eq. (1) as a subroutine. In other words, any algorithm that solves the standard RL problem can be used to solve the more general convex MDP problem. More specifically, we make the following contributions. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Firstly, we adapt the meta-algorithm of Abernethy and Wang [3] for solving Eq. (2). The key idea is to use Fenchel duality to convert the convex MDP problem into a two-player zero-sum game between the agent (henceforth, policy player) and an adversary that produces rewards (henceforth, cost player) that the agent must maximize [3, 6]. From the agent’s point of view, the game is bilinear, and so for fixed rewards produced by the adversary the problem reduces to the standard RL problem with non-stationary reward (Fig. 1). Secondly, we propose a sample efficient policy player that uses a standard RL algorithm (eg, [35, 60]), and computes an optimistic policy with respect to the non-stationary reward at each iteration. In other words, we use algorithms that were developed to achieve low regret in the standard RL setup, to achieve low regret as policy players in the min-max game we formulate to solve the convex MDP. 
Our main result is that the average of the policies produced by the policy player converges to a solution to the convex MDP problem (Eq. (2)). Inspired by this principle, we also propose a recipe for using deep-RL (DRL) agents to solve convex MDPs heuristically: provide the agent non-stationary rewards from the cost player. We explore this principle in our experiments. Finally, we show that choosing specific algorithms for the policy and cost players unifies several disparate branches of RL problems, such as apprenticeship learning, constrained MDPs, and pure exploration into a single framework, as we summarize in Table 1. 2 Reinforcement Learning Preliminaries In RL an agent interacts with an environment over a number of time steps and seeks to maximize its cumulative reward. We consider two cases, the average reward case and the discounted case. The Markov decision process (MDP) is defined by the tuple (S,A, P,R) for the average reward case and by the tuple (S,A, P,R, γ, d0) for the discounted case. We assume an infinite horizon, finite state-action problem where initially, the state of the agent is sampled according to s0 ∼ d0, then at each time t the agent is in state st ∈ S, selects action at ∈ A according to some policy π(st, ·), receives reward rt ∼ R(st, at) and transitions to new state st+1 ∈ S according to the probability distribution P (·, st, at). The two performance metrics we consider are given by Javgπ = lim T→∞ 1 T E T∑ t=1 rt, J γ π = (1− γ)E ∞∑ t=1 γtrt, (3) for the average reward case and discounted case respectively. The goal of the agent is to find a policy that maximizes Javgπ or J γ π . Any stationary policy π induces a state-action occupancy measure dπ, which measures how often the agent visits each state-action when following π. Let Pπ(st = ·) be the probability measure over states at time t under policy π, then davgπ (s, a) = lim T→∞ 1 T E T∑ t=1 Pπ(st = s)π(s, a), dγπ(s, a) = (1− γ)E ∞∑ t=1 γtPπ(st = s)π(s, a), for the average reward case and the discounted case respectively. With these, we can rewrite the RL objective in Eq. (3) in terms of the occupancy measure using the following well-known result, which for completeness we prove in Appendix B. Proposition 1. For both the average and the discounted case, the agent objective function Eq. (3) can be written in terms of the occupancy measure as Jπ = ∑ s,a r(s, a)dπ(s, a). Given an occupancy measure it is possible to recover the policy by setting π(s, a) = dπ(s, a)/ ∑ a dπ(s, a) if ∑ a dπ(s, a) > 0, and π(s, a) = 1/|A| otherwise. Accordingly, in this paper we shall formulate the RL problem using the state-action occupancy measure, and both the standard RL problem (Eq. (1)) and the convex MDP problem (Eq. (2)) are convex optimization problems in variable dπ. For the purposes of this manuscript we do not make a distinction between the average and discounted settings, other than through the convex polytopes of feasible occupancy measures, which we define next. Definition 1 (State-action occupancy’s polytope [55]). For the average reward case the set of admissible state-action occupancies is Kavg = {dπ | dπ ≥ 0, ∑ s,a dπ(s, a) = 1, ∑ a dπ(s, a) = ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}, and for the discounted case it is given by Kγ = {dπ | dπ ≥ 0, ∑ a dπ(s, a) = (1− γ)d0(s) + γ ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}. We note that being a polytope implies that K is a convex and compact set. 
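As a concrete illustration of Definition 1 and of the policy-recovery rule π(s, a) = dπ(s, a)/∑_a dπ(s, a), here is a small tabular sketch for the discounted case. The transition-tensor layout P[s′, s, a] and the function names are assumptions of this sketch, not part of the paper.

```python
import numpy as np

def occupancy_measure(P, pi, d0, gamma):
    """Discounted state-action occupancy d_pi satisfying the balance equation of Definition 1.

    P[s_next, s, a] = transition probability, pi[s, a] = policy, d0 = initial state distribution.
    """
    S, A = pi.shape
    # M[s_next, s] = sum_a P[s_next, s, a] * pi[s, a]
    M = np.einsum("nsa,sa->ns", P, pi)
    d_state = np.linalg.solve(np.eye(S) - gamma * M, (1 - gamma) * d0)
    return d_state[:, None] * pi                 # d(s, a) = d_state(s) * pi(s, a)

def policy_from_occupancy(d):
    """Recover pi(s, a) = d(s, a) / sum_a d(s, a); uniform where the state has zero mass."""
    mass = d.sum(axis=1, keepdims=True)
    uniform = np.full_like(d, 1.0 / d.shape[1])
    return np.where(mass > 0, d / np.maximum(mass, 1e-12), uniform)
```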
The convex MDP problem is defined for the tuple (S,A, P, f) in the average cost case and (S,A, P, f, γ, d0) in the discounted case. This tuple is defining a state-action occupancy’s polytope K (Definition 1), and the problem is to find a policy π whose state occupancy dπ is in this polytope and minimizes the function f (Eq. (2)). 3 A Meta-Algorithm for Solving Convex MDPs via RL To solve the convex MDP problem (Eq. (2)) we need to find an occupancy measure dπ (and associated policy) that minimizes the function f . Since both f : K → R and the set K are convex this is a convex optimization problem. However, it is a challenging one due to the nature of learning about the environment through stochastic interactions. In this section we show how to reformulate the convex MDP problem (Eq. (2)) so that standard RL algorithms can be used to solve it, allowing us to harness decades of work on solving vanilla RL problems. To do that we will need the following definition. Definition 2 (Fenchel conjugate). For a function f : Rn → R ∪ {−∞,∞}, its Fenchel conjugate is denoted f∗ : Rn → R ∪ {−∞,∞} and defined as f∗(x) := supy x · y − f(y). Remark 1. The Fenchel conjugate function f∗ is always convex (when it exists) even if f is not. Furthermore, the biconjugate f∗∗ := (f∗)∗ equals f if and only if f is convex and lower semicontinuous. Using this we can rewrite the convex MDP problem (Eq. (2)) as fOPT = min dπ∈K f(dπ) = min dπ∈K max λ∈Λ (λ · dπ − f∗(λ)) = max λ∈Λ min dπ∈K (λ · dπ − f∗(λ)) (4) where Λ is the closure of (sub-)gradient space {∂f(dπ)|dπ ∈ K}, which is a convex set [3, Theorem 4]. As both sets are convex, this is a convex-concave saddle-point problem and a zero-sum two-player game [54, 49], and we were able to swap the order of minimization and maximization using the minimax theorem [71]. With this we define the Lagrangian as L(dπ, λ) := λ · dπ − f∗(λ). For a fixed λ ∈ Λ, minimizing the Lagrangian is a standard RL problem of the form of Eq. (1), i.e., equivalent to maximizing a reward r = −λ. Thus, one might hope that by producing an optimal dual variable λ? we could simply solve d?π = argmindπ∈K L(·, λ ?) for the optimal occupancy measure. However, the next lemma states that this is not possible in general. Lemma 1. There exists an MDP M and convex function f for which there is no stationary reward r ∈ RS×A such that arg maxdπ∈K dπ · r = arg mindπ∈K f(dπ). To see this note that for any reward r there is a deterministic policy that optimizes the reward [55], but for some choices of f no deterministic policy is optimal, eg, when f is the negative entropy function. This result tells us that even if we have access to an optimal dual-variable we cannot simply use it to recover the stationary distribution that solves the convex MDP problem in general. To overcome this issue we develop an algorithm that generates a sequence of policies {πk}k∈N such that the average converges to an optimal policy for Eq. (2), i.e., (1/K) ∑K k=1 d k π → d?π ∈ arg mindπ∈K f(dπ). The algorithm we develop is described in Algorithm 1 and is adapted from the meta-algorithm described in Abernethy and Wang [3]. It is referred to as a meta-algorithm since it relies on supplied sub-routine algorithms Algπ and Algλ. The reinforcement learning algorithm Algπ takes as input a reward vector and returns a state-action occupancy measure dπ. The cost algorithm Algλ can be a more general function of the entire history. We discuss concrete examples of Algπ and Algλ in Section 4. 
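As a quick worked example of Definition 2 (our addition, not part of the paper's exposition), consider the negative-entropy objective mentioned in the proof sketch of Lemma 1, extended by +∞ outside the probability simplex so that Definition 2 applies directly:

```latex
% Worked example (ours): negative entropy on the simplex and its Fenchel conjugate.
f(d_\pi) = \sum_{s,a} d_\pi(s,a)\,\log d_\pi(s,a), \qquad
f^*(\lambda) = \sup_{y}\big(\lambda \cdot y - f(y)\big) = \log \sum_{s,a} e^{\lambda(s,a)}, \qquad
\nabla f(d_\pi) = 1 + \log d_\pi .
```

With this choice, the cost produced by the gradient-based (FTL) cost player of Section 4 is 1 + log d̄π, so the policy player is asked to maximize the reward −1 − log d̄π, which is largest at rarely visited state-action pairs; no single stationary reward reproduces this behaviour, in line with Lemma 1.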
Algorithm 1: meta-algorithm for convex MDPs 1: Input: convex-concave payoff L : K × Λ→ R, algorithms Algλ,Algπ , K ∈ N 2: for k = 1, . . . ,K do 3: λk = Algλ(d 1 π, . . . , d k−1 π ;L) 4: dkπ = Algπ(−λk) 5: end for 6: Return d̄Kπ = 1 K ∑K k=1 d k π, λ̄ K = 1K ∑K k=1 λ k In order to analyze this algorithm we will need a small detour into online convex optimization (OCO). In OCO, a learner is presented with a sequence of K convex loss functions `1, `2, . . . , `K : K → R and at each round k must select a point xk ∈ K after which it suffers a loss of `k(xk). At time period k the learner is assumed to have perfect knowledge of the loss functions `1, . . . , `k−1. The learner wants to minimize its average regret, defined as R̄K := 1 K ( K∑ k=1 `k(xk)−min x∈K K∑ k=1 `k(x) ) . In the context of convex reinforcement learning and meta-algorithm 1, the loss functions for the cost player are `kλ = −L(·, λk), and for the policy player are `kπ = L(dkπ, ·), with associated average regrets R̄πK and R̄ λ K . This brings us to the following theorem. Theorem 1 (Theorem 2, [3]). Assume that Algπ and Algλ have guaranteed average regret bounded as R̄πK ≤ K and R̄λK ≤ δK , respectively. Then Algorithm 1 outputs d̄Kπ and λ̄K satisfying mindπ∈K L(dπ, λ̄K) ≥ fOPT − K − δK and maxλ∈Λ L(d̄Kπ , λ) ≤ fOPT + K + δK . This theorem tells us that so long as the RL algorithm we employ has guaranteed low-regret, and assuming we choose a reasonable low-regret algorithm for deciding the costs, then the meta-algorithm will produce a solution to the convex MDP problem (Eq. (2)) to any desired tolerance, this is because fOPT ≤ f(d̄Kπ ) = maxλ L(d̄Kπ , λ) ≤ fOPT + K +δK . For example, we shall later present algorithms that have regret bounded as K = δK ≤ O(1/ √ K), in which case we have f(d̄Kπ )− fOPT ≤ O(1/ √ K). (5) Non-Convex f . Remark 1 implies that the game maxλ∈Λ mindπ∈K (λ · dπ − f∗(λ)) is concaveconvex for any function f , so we can solve it with Algorithm 1, even for a non-convex f . From weak duality the value of the Lagrangian on the output of Algorithm 1, L(d̄π, λ̄), is a lower bound on the optimal solution fOPT. In addition, since f(dπ) is always an upper bound on fOPT we have both an upper bound and a lower bound on the optimal value: L(d̄π, λ̄) ≤ fOPT ≤ f(d̄π). 4 Policy and Cost Players for Convex MDPs In this section we present several algorithms for the policy and cost players that can be used in Algorithm 1. Any combination of these algorithms is valid and will come with different practical and theoretical performance. In Section 6 we show that several well known methods in the literature correspond to particular choices of cost and policy players and so fall under our framework. In addition, in this section we assume that λmax = max λ∈Λ max s,a |λ(s, a)| <∞, which holds when the set Λ is compact. One way to guarantee that Λ is compact is to consider functions f with Lipschitz continuous gradients (which implies bounded gradients since the set K is compact). For simplicity, we further assume that λmax ≤ 1. By making this assumption we assure that the non stationary rewards produced by the cost player are bounded by 1 as is usually done in RL. 4.1 Cost Player Follow the Leader (FTL) is a classic OCO algorithm that selects λk to be the best point in hindsight. In the special case of convex MDPs, as defined in Eq. 
(4), FTL has a simpler form: λ_k = argmax_{λ∈Λ} ∑_{j=1}^{k−1} L(d_π^j, λ) = argmax_{λ∈Λ} (λ · ∑_{j=1}^{k−1} d_π^j − (k−1) f*(λ)) = ∇f(d̄_π^{k−1}), (6) where d̄_π^{k−1} = (1/(k−1)) ∑_{j=1}^{k−1} d_π^j and the last equality follows from the fact that (∇f*)^{−1} = ∇f [56]. The average regret of FTL is guaranteed to be R̄_K ≤ c/√K under some assumptions [29]. In some cases, and specifically when the set K is a polytope and the function f is strongly convex, FTL can enjoy logarithmic or even constant regret; see [32, 29] for more details.

Online Mirror Descent (OMD) uses the following update [47, 9]: λ_k = argmax_{λ∈Λ} ((λ − λ_{k−1}) · ∇_λ L(d_π^{k−1}, λ_{k−1}) + α_k B_r(λ, λ_{k−1})), where α_k is a learning rate and B_r is a Bregman divergence [14]. For B_r(x) = 0.5‖x‖_2^2 we get online gradient descent [79], and for B_r(x) = x · log(x) we get multiplicative weights [23] as special cases. We also note that OMD is equivalent to a linearized version of Follow the Regularized Leader (FTRL) [43, 28]. The average regret of OMD is R̄_K ≤ c/√K under some assumptions; see, for example, [28].

4.2 Policy Players

4.2.1 Best Response

In OCO, the best response is to simply ignore the history and play the best option on the current round, which has a guaranteed average regret bound of R̄_K ≤ 0 (this requires knowledge of the current loss function, which is usually not available but is in this case). When applied to Eq. (4), it is possible to find the best response d_π^k using standard RL techniques, since d_π^k = argmin_{d_π∈K} L(d_π, λ_k) = argmin_{d_π∈K} (d_π · λ_k − f*(λ_k)) = argmax_{d_π∈K} d_π · (−λ_k), which is an RL problem for maximizing the reward (−λ_k). In principle, any RL algorithm that eventually solves the RL problem can be used to find the best response, which substantiates our claim in the introduction. For example, tabular Q-learning executed for sufficiently long and with a suitable exploration strategy will converge to the optimal policy [72]. In the non-tabular case we could parameterize a deep neural network to represent the Q-values [45], and if the network has sufficient capacity then similar guarantees might hold. We make no claims on the efficiency or tractability of this approach, just that in principle such an approach would provide the best response at each iteration and therefore satisfy the required conditions to solve the convex MDP problem.

4.2.2 Approximate Best Response

The caveat in using the best response as a policy player is that in practice it can only be found approximately, by executing an RL algorithm in the environment. This leads to defining an approximate best response via the Probably Approximately Correct (PAC) framework. We say that a policy player is PAC(ε, δ) if it finds an ε-optimal policy to an RL problem with probability at least 1 − δ. In addition, we say that a policy π′ is ε-optimal if its state occupancy d_π′ is such that max_{d_π∈K} d_π · (−λ_k) − d_π′ · (−λ_k) ≤ ε. For example, the algorithm in [40] can find an ε-optimal policy for the discounted RL problem after seeing O(SA/((1 − γ)^3 ε^2) · log(1/δ)) samples, and the algorithm in [36] can find an ε-optimal policy for the average reward RL problem after seeing O(t_mix^2 SA/ε^2 · log(1/δ)) samples, where t_mix is the mixing time (see, eg, [42, 76] for a formal definition). The following lemma analyzes the sample complexity of Algorithm 1 with an approximate best response policy player for the average reward RL problem [36]. The result can be easily extended to the discounted case using the algorithm in [40]. Other relaxations of the best response for specific algorithms can be found in [65, 44, 33, 30].
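Before turning to the sample-complexity statements, here is a minimal tabular sketch of Algorithm 1 with an FTL cost player (Eq. (6)) and a best-response policy player implemented by value iteration on the reward −λ_k. It reuses occupancy_measure and policy_from_occupancy from the earlier sketch, and grad_f stands for any (sub)gradient of the chosen convex objective; all names are our own and the sketch makes no claim about the paper's implementation.

```python
import numpy as np

def best_response(P, reward, gamma, iters=500):
    """Greedy policy for a tabular MDP via value iteration on a fixed reward r(s, a)."""
    S, A = reward.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = reward + gamma * np.einsum("nsa,n->sa", P, V)
    pi = np.zeros((S, A))
    pi[np.arange(S), Q.argmax(axis=1)] = 1.0
    return pi

def convex_mdp_meta(P, d0, gamma, grad_f, K):
    """Algorithm 1 with FTL costs: lambda_k = grad f at the running mean occupancy."""
    S, A = P.shape[1], P.shape[2]
    d_bar = np.full((S, A), 1.0 / (S * A))       # arbitrary initial occupancy for the first cost
    occupancies = []
    for k in range(1, K + 1):
        lam = grad_f(d_bar)                      # FTL cost player, Eq. (6)
        pi = best_response(P, -lam, gamma)       # policy player: RL with reward -lambda_k
        d = occupancy_measure(P, pi, d0, gamma)
        occupancies.append(d)
        d_bar = np.mean(occupancies, axis=0)     # mixed occupancy, as returned by Algorithm 1
    return d_bar, policy_from_occupancy(d_bar)
```

Swapping best_response for a single optimistic RL update on the same non-stationary reward recovers the policy players of Section 4.2.3.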
Lemma 2 (The sample complexity of approximate best response in convex MDPs with average occupancy measure). For a convex function f , running Algorithm 1 with an oracle cost player with regret R̄λK = O(1/K) and an approximate best response policy player that solves the average reward RL problem in iteration k to accuracy k = 1/k returns an occupancy measure d̄Kπ that satisfies f(d̄Kπ )− fOPT ≤ with probability 1− δ after seeing O(t2mixSA log(2K/ δ)/ 3δ3) samples. Similarly, for R̄λK = O(1/ √ K), setting k = 1/ √ k requires O(t2mixSA log(2K/ δ)/ 4δ4) samples. 4.2.3 Non-Stationary RL Algorithms We now discuss a different type of policy players; instead of solving an MDP to accuracy , these algorithms perform a single RL update to the policy, with cost −λk. In our setup the reward is known and deterministic but non-stationary, while in the standard RL setup it is unknown, stochastic, and stationary. We conjecture that any RL algorithm can be adapted to the known non-stationary reward setup we consider here. In most cases both Bayesian [51, 48] and frequentist [8, 35] approaches to the stochastic RL problem solve a modified (eg, by adding optimism) Bellman equation at each time period and swapping in a known but non-stationary reward is unlikely to present a problem. To support this conjecture we shall prove that this is exactly the case for UCRL2 [35]. UCRL2 is an RL algorithm that was designed and analyzed in the standard RL setup, and we shall show that it is easily adapted to the non-stationary but known reward setup that we require. To make this claim more general, we will also discuss a similar result for the MDPO algorithm [61] that was given in a slightly different setup. UCRL2 is a model based algorithm that maintains an estimate of the reward and the transition function as well as confidence sets about those estimates. In our case the reward at time k is known, so we only need to consider uncertainty in the dynamics. UCRL2 guarantees that in any iteration k, the true transition function is in a confidence set with high probability, i.e., P ∈ Pk for confidence set Pk. If we denote by JP,Rπ the value of policy π in an MDP with dynamics P and reward R then the optimistic policy is π̃k = arg maxπ maxP ′∈Pk J P ′,−λk π . Acting according to this policy is guaranteed to attain low regret. In the following results for UCRL2 we will use the constant D, which denotes the diameter of the MDP, see [35, Definition 1] for more details. In the supplementary material (Appendix E), we provide a proof sketch that closely follows [35]. Lemma 3 (Non stationary regret of UCRL2). For an MDP with dynamics P, diameter D, an arbitrary sequence of known and bounded rewards { ri : maxs,a |ri(s, a)| ≤ 1 }K i=1 , such that the optimal average reward at episode k, with respect to P and rk is J?k , then with probability at least 1−δ, the average regret of UCRL2 is at most R̄K = 1K ∑K k=1 J ? k−J π̃k k ≤ O(DS √ A log(K/δ)/K). Next, we give a PAC( , δ) sample complexity result for the mixed policy π̄K , that is produced by running Algorithm 1 with UCRL2 as a policy player. Lemma 4 (The sample complexity of non-stationary RL algorithms in convex MDPs). For a convex function f, running Algorithm 1 with an oracle cost player with regret R̄λK ≤ c0/ √ K and UCRL2 as a policy player returns an occupancy measure d̄Kπ that satisfies f(d̄ K π )− fOPT ≤ with probability 1− δ after K = O ( D2S2A δ2 2 log( 2DSA δ ) ) steps. MDPO. Another optimistic algorithm is Mirror Descent Policy Optimization [60, MDPO]. 
MDPO is a model free RL algorithm that is very similar to popular DRL algorithms like TRPO [58] and MPO [2]. In [24, 59, 5], the authors established the global convergence of MDPO and in [15, 60], the authors showed that MDPO with optimistic exploration enjoys low regret. The analysis for MDPO is given in a finite horizon MDP with horizon H , which is not the focus of our paper. Nevertheless, to support our conjecture that any stochastic RL algorithm can be adapted to the known non-stationary reward setup, we quickly discuss the regret of MDPO in this setup. We also note that MDPO is closer to practical DRL algorithms [70]. In a finite horizon MDP with horizon H and known, non-stationary and bounded rewards, the regret of MDPO is bounded by R̄K ≤ O(H2S √ A/K) [61, Lemma 4] with high probability. To compare this result with UCRL2, we refer to a result from [57], which analyzed UCRL2 in the adversarial setup, that includes our setup as a special case. In a finite horizon MDP with horizon H it was shown that setting δ = SA/K with probability 1 − δ its regret is bounded by R̄K ≤ O(HS √ A log(K)/K) [57, Corollary 5], which is better by a factor of H than MDPO. Discussion. Comparing the results in Lemma 4 with Lemma 2 suggests that using an RL algorithm with non stationary reward as a policy player requires O(1/ 2) samples to find an −optimal policy, while using an approximate best response requires O(1/ 3). In first glance, this results also improves the previously best known result of Hazan et al. [30] for approximate Frank-Wolfe (FW) that requires O(1/ 3) samples. However, there are more details that have to be considered as we now discuss. Firstly, Lemma 4 and Lemma 2 assume access to an oracle cost player with some regret and do not consider how to implement such a cost player. The main challenge is that the cost player does not have access to the true state occupancy and must estimate it from samples. If we do not reuse samples from previous policies to estimate the state occupancy of the current policy we will require O(1/ 3) trajectories overall [30]. A better approach would use the samples from previous episodes to learn the transition function. Then, given the estimated transition function and the policy, we can compute an approximation of the state occupancy. We conjecture that such an approach would lead to a O(1/ 2) sample complexity, closing the gap with standard RL. Secondly, while our focus is on the dependence in , our bound Lemma 4 is not tight in δ, i.e., it scales with 1/δ2 where it should be possible to achieve a log(1/δ) scaling. Again we conjecture an improvement in the bound is possible; see, eg, [38, Appendix F.]. 5 Convex Constraints We have restricted the presentation so far to unconstrained convex problems, in this section we extend the above results to the constrained case. The problem we consider is min dπ∈K f(dπ) subject to gi(dπ) ≤ 0, i = 1, . . .m, where f and the constraint functions gi are convex. Previous work focused on the case where both f and gi are linear [7, 67, 12, 68, 18, 16, 11]. We can use the same Fenchel dual machinery we developed before, but now taking into account the constraints. Consider the Lagrangian L(dπ, µ) = f(dπ) + ∑m i=1 µigi(dπ) = max ν (ν · dπ − f∗(ν)) + ∑m i=1 µi max vi (dπvi − g∗i (vi)) . over dual variables µ ≥ 0, with new variables vi and ν. At first glance this does not look convexconcave, however we can introduce new variables ζi = µivi to obtain L(dπ, µ, ν, ζ1, . . . , ζm) = ν · dπ − f∗(ν) + ∑m i=1 (dπζi − µig∗i (ζi/µi)) . 
(7) This is convex (indeed affine) in dπ and concave in (ν, µ, ζ1, . . . , ζm), since it includes the perspective transform of the functions gi [13]. The Lagrangian involves a cost vector, ν + ∑m i=1 ζi, linearly interacting with dπ, and therefore we can use the same policy players as before to minimize this cost. For the cost player, it is possible to use OMD on Eq. (7) jointly for the variables ν, µ and ζ. It is more challenging to use best-response and FTL for the cost-player variables as the maximum value of the Lagrangian is unbounded for some values of dπ. Another option is to treat the problem as a three-player game. In this case the policy player controls dπ as before, one cost player chooses (ν, ζ1, . . . , ζm) and can use the algorithms we have previously discussed, and the other cost player chooses µ with some restrictions on their choice of algorithm. Analyzing the regret in that case is outside the scope of this paper. 6 Examples In this section we explain how existing algorithms can be seen as instances of the meta-algorithm for various choices of the objective function f and the cost and policy player algorithms Algλ and Algπ . We summarized the relationships in Table 1. 6.1 Apprenticeship Learning In apprenticeship learning (AL), we have an MDP without an explicit reward function. Instead, an expert provides demonstrations which are used to estimate the expert state occupancy measure dE . Abbeel and Ng [1] formalized the AL problem as finding a policy π whose state occupancy is close to that of the expert by minimizing the convex function f(dπ) = ||dπ − dE ||. The convex conjugate of f is given by f∗(y) = y · dE if ||y||∗ ≤ 1 and∞ otherwise, where || · ||∗ denotes the dual norm. Plugging f∗ into Eq. (4) results in the following game: min dπ∈K ||dπ − dE || = min dπ∈K max ||λ||∗≤1 λ · dπ − λ · dE . (8) Inspecting Eq. (8), we can see that the norm in the function f that is used to measure the distance from the expert induces a constraint set for the cost variable, which is a unit ball in the dual norm. Algλ=OMD, Algπ=Best Response/RL. The Multiplicative Weights AL algorithms [65, MWAL] was proposed to solve the AL problem with f(dπ) = ||dπ − dE ||∞. It uses the best response as the policy player and multiplicative weights as the cost player (a special case of OMD). MWAL has also been used to solve AL in contextual MDPs [10] and to find feasible solutions to convex-constrained MDPs [44]. We note that in practice the best response can only be solved approximately, as we discussed in Section 4. Instead, in online AL [61] the authors proposed to use MDPO as the policy player, which guarantees a regret bound of R̄K ≤ c/ √ K. They showed that their algorithm is equivalent to Wasserstein GAIL [73, 78] and in practice tends to perform similarly to GAIL. Algλ=FTL, Algπ=Best Response. When the policy player plays the best response and the cost player plays FTL, Algorithm 1 is equivalent to the Frank-Wolfe algorithm [22, 3] for minimizing f (Eq. (2)). Pseudo-code for this is included in the appendix (Algorithm 3). The algorithm finds a point dkπ ∈ K that has the largest inner-product (best response) with the negative gradient (i.e., FTL). Abbeel and Ng [1] proposed two algorithms for AL, the projection algorithm and the max margin algorithm. The projection algorithm is essentially a FW algorithm, as was suggested in the supplementary [1] and was later shown formally in [75]. Thus, it is a projection free algorithm in the sense that it avoids projecting dπ into K, despite the name. 
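To make the FTL-plus-best-response combination concrete, the following sketch runs the resulting Frank-Wolfe loop over the occupancy polytope, reusing the tabular helpers occupancy_measure, best_response, and policy_from_occupancy introduced in the earlier sketches (our names, under the same assumptions). For apprenticeship learning one simply plugs in the gradient of f(dπ) = ½‖dπ − dE‖_2^2.

```python
import numpy as np

def frank_wolfe_convex_mdp(P, d0, gamma, grad_f, K):
    """Frank-Wolfe over the occupancy polytope: FTL cost player + best-response policy player.

    For apprenticeship learning with f(d) = 0.5 * ||d - d_E||_2^2, pass
    grad_f = lambda d: d - d_E, where d_E is the expert occupancy estimated from demonstrations.
    """
    S, A = P.shape[1], P.shape[2]
    d = occupancy_measure(P, np.full((S, A), 1.0 / A), d0, gamma)   # start from a uniform policy
    for k in range(1, K + 1):
        cost = grad_f(d)
        pi_k = best_response(P, -cost, gamma)        # linear minimization oracle over K
        s_k = occupancy_measure(P, pi_k, d0, gamma)
        step = 2.0 / (k + 2.0)                       # standard Frank-Wolfe step size
        d = (1.0 - step) * d + step * s_k            # convex combination stays inside K
    return d, policy_from_occupancy(d)
```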
In their case the gradient is given by ∇f (dπ) = dπ − dE . Thus, finding the best response is equivalent to solving an MDP whose reward is dE − dπ . In a similar fashion, FW can be used to solve convex MDPs more generally [30]. Specifically, in [30], the authors considered the problem of pure exploration, which they defined as finding a policy that maximizes entropy. Fully Corrective FW. The FW algorithm has many variants (see [33] for a survey) some of which enjoy faster rates of convergence in special cases. Concretely, when the constraint set is a polytope, which is the case for convex MDPs (Definition 1), some variants achieve a linear rate of convergence [34, 75]. One such variant is the Fully corrective FW, which replaces the learning rate update (see line 4 of Algorithm 3 in the supplementary), with a minimization problem over the convex hull of occupancy measures at the previous time-step. This is guaranteed to be at least as good as the learning rate update. Interestingly, the second algorithm of Abbeel and Ng [1], the max margin algorithm, is exactly equivalent to this fully corrective FW variant. This implies that the max-margin algorithm enjoys a better theoretical convergence rate than the ‘projection’ variant, as was observed empirically in [1]. 6.2 GAIL and DIAYN: Algλ=FTL, Algπ=RL We now discuss the objectives of two popular algorithms, GAIL [31] and DIAYN [20], which perform AL and diverse skill discovery respectively. Our analysis suggests that GAIL and DIAYN share the same objective function. In GAIL, this objective function is minimized, which is a convex MDP, however, in DIAYN it is maximized, which is therefore not a convex MDP. We start the discussion with DIAYN and follow with a simple construction showing the equivalence to GAIL. DIAYN. Discriminative approaches [26, 20] rely on the intuition that skills are diverse when they are entropic and easily discriminated by observing the states that they visit. Given a probability space (Ω,F ,P), state random variables S : Ω→ S and latent skills Z : Ω→ Z with prior p, the key term of interest being maximized in DIAYN [20] is the mutual information: I(S;Z) = Ez∼p;s∼dzπ [log p(z|s)− log p(z)], (9) where dzπ is the stationary distribution induced by the policy π(a | s, z). For each skill z, this corresponds to a standard RL problem with (conditional) policy π(a | s, z) and reward function r(s|z) = log p(z|s) − log p(z). The first term encourages the policy to visit states for which the underlying skill has high-probability under the posterior p(z | s), while the second term ensures a high entropy distribution over skills. In practice, the full DIAYN objective further regularizes the learnt policy by including entropy terms − log π(a | s, z). For large state spaces, p(z|s) is typically intractable and Eq. 9 is replaced with a variational lower-bound, where the true posterior is replaced with a learned discriminator qφ(z|s). Here, we focus on the simple setting where z is a categorical distribution over |Z| outcomes, yielding |Z| policies πz , and qφ is a classifier over these |Z| skills with parameters φ. We now show that a similar intrinsic reward can be derived using the framework of convex MDPs. We start by writing the true posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = d z π(s)p(z)∑ k d k π(s)p(k) . Combing this with Eq. 
(9) yields: Ez∼p(z),s∼dzπ [log p(z|s)− p(z)] = ∑ z p(z) ∑ s dzπ(s) [ log ( dzπ(s)p(z)∑ k d k π(s)p(k) ) − log p(z) ] = ∑ z p(z)KL(dzπ|| ∑ k p(k)dkπ) = EzKL(dzπ||Ekdkπ), (10) where KL denotes the Kullback–Leibler divergence [39]. Intuitively, finding a set of policies π1, . . . , πz that minimize Eq. (10) will result in finding policies that visit similar states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π. This is a convex MDP because the KL-divergence is jointly convex in both arguments [13, Example 3.19]. We will soon show that this is the objective of GAIL. On the other hand, a set of policies that maximize Eq. (10) is diverse, as the policies visit different states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π . We follow on with deriving the FTL player for the convex MDP in Eq. (10). We will then show that this FTL player is producing an intrinsic reward that is equivalent to the intrinsic reward used in GAIL and DIAYN (despite the fact that DIAYN is not a convex MDP). According to Eq. (6), the FTL cost player will produce a cost λk at iteration k given by ∇dzπKL(d z π|| ∑ k p(k)dkπ) = E z∼p(z) [ log dzπ∑ k d k πp(k) + 1− d z πp(z)∑ k d k πp(k) ] = E z∼p(z) [ log(p(z|s))− log(p(z))︸ ︷︷ ︸ Mutual Information +1− p(z|s)︸ ︷︷ ︸ Gradient correction ] , (11) where the equality follows from writing the posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = dzπ(s)p(z)∑ k d k π(s)p(k) . Replacing the posterior p(z|s) with a learnt discriminator qφ(z|s) recovers the mutual-information rewards of DIAYN, with additional terms 1− p(z | s) which we refer to as “gradient correction” terms. Inspecting the common scenario of a uniform prior over the latent variables, p(z) = 1/|Z|, we get that the expectation of the gradient correction term ∑ z p(z)(1− p(z|s)) = 1− 1/|Z| in each state. From the perspective of the policy player, adding a constant to the reward does not change the best response policy, nor the optimistic policy. Therefore, the gradient correction term does not have an effect on the optimization under a uniform prior, and we retrieved the reward of DIAYN. These algorithms differ however for more general priors p(z), which we explore empirically in Appendix F. GAIL. We further show how Eq. (10) extends to GAIL [31] via a simple construction. Consider a binary latent space of size |Z| = 2, where z = 1 corresponds to the policy of the agent and z = 2 corresponds to the policy of the expert which is fixed. In addition, consider a uniform prior over the latent variables, i.e., p(z = 1) = 12 . By removing the constant terms in Eq. (11), one retrieves the GAIL [31] algorithm. The cost log(p(z|s)) is the probability of the discriminator to identify the agent, and the policy player is MDPO (which is similar to TRPO in GAIL). 7 Discussion In this work we reformulated the convex MDP problem as a convex-concave game between the agent and another player that is producing costs (negative rewards) and proposed a meta-algorithm for solving it. We observed that many algorithms in the literature can be interpreted as instances of the metaalgorithm by selecting different pairs of subroutines employed by the policy and cost players. The Frank-Wolfe algorithm, which combines best response with FTL, was originally proposed for AL [1, 75] but can be used for any convex MDP problem as was suggested in [30]. Zhang et al. 
[77], unified the problems of RL, AL, constrained MDPs with linear constraints and maximum entropy exploration under the framework of convex MDPs. We extended the framework to allow convex constraints (Section 5) and explained the objective of GAIL as a convex MDP (Section 6.2). We also discussed non convex objectives (Section 3) and analyzed unsupervised skill discovery via the maximization of mutual information (Section 6.2) as a special case. Finally, we would like to point out a recent work by Geist et al. [25], which was published concurrently to ours, and studies the convex MDP problem from the viewpoint of mean field games. There are also algorithms for convex MDPs that cannot be explained as instances of Algorithm 1. In particular, Zhang et al. [77] proposed a policy gradient algorithm for convex MDPs in which each step of policy gradient involves solving a new saddle point problem (formulated using the Fenchel dual). This is different from our approach since we solve a single saddle point problem iteratively, and furthermore we have much more flexibility about which algorithms the policy player can use. Moreover, for the convergence guarantee [77, Theorem 4.5] to hold, the saddle point problem has to be solved exactly, while in practice it is only solved approximately [77, Algorithm 1], which hinders its sample efficiency. Fenchel duality has also been used in off policy evaluation (OPE) in [46, 74]. The difference between these works and ours is that we train a policy to minimize an objective, while in OPE a target policy is fixed and its value is estimated from data produced by a behaviour policy. In order to solve a practical convex MDP problem in a given domain it would be prudent to use an RL algorithm that is known to be high performing for the vanilla RL problem as the policy player. From the theoretical point of view this could be MDPO or UCRL2, which we have shown come with strong guarantees. From the practical point of view using a high performing DRL algorithm, which may be specific to the domain, will usually yield the best results. For the cost player using FTL, i.e., using the gradient of the objective function, is typically the best choice. Acknowledgments and Disclosure of Funding We would like to thank Yasin Abbasi-Yadkorie, Vlad Mnih, Jacob Abernethy, Lior Shani and Doina Precup for their comments and discussion on this work. Work done at DeepMind, the authors received no specific funding for this work.
1. What is the focus and contribution of the paper regarding convex optimization in MDPs? 2. How does the proposed approach compare to prior works, specifically the SODA 2015 paper by Agrawal and Devanur? 3. Are there any concerns regarding the novelty of the paper's algorithm, particularly in relation to existing works? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any minor concerns or suggestions for improvement regarding the presentation of the algorithm or its components?
Summary Of The Paper Review
Summary Of The Paper The authors propose to consider a convex optimization problem over the polytope formed by the state-visitation frequencies of an MDP. They construct algorithms based on existing sub-gradient descent tools, such as Frank-Wolfe, FTL and OMD, in conjunction with an oracle that (approximately) solves an MDP with scalar rewards. Altogether, they provide a meta-algorithm that translates a scalar-reward MDP oracle into an algorithm for the convex MDP. The authors also highlight several applications of their formulation, such as the Apprenticeship Learning problem and the constrained MDP problem. Review While the authors propose an interesting problem, I have a major concern over the novelty of the paper compared to the following SODA 2015 paper by Agrawal and Devanur (2015): https://arxiv.org/abs/1410.7596 The authors' convex optimization problem can be phrased in that paper's setting (their Definition 1) by putting A_t = K for all t (and also rephrasing convex minimization as concave maximization). Algorithm 5.1 in Agrawal and Devanur (2015) solves the convex optimization problem of their Definition 1. The algorithm only requires an oracle that outputs a solution to a linear optimization (the definition of v_t^† in Algorithm 5.1) over the feasible region A_t. The framework in Agrawal and Devanur (2015) is quite similar to what is proposed in the paper, in the sense that both allow a wide suite of (sub)gradient descent algorithms, as highlighted in their Section 3. In this regard, the algorithm design and analysis of Algorithm 1 appear to be fundamentally based on existing works. One difference between the two papers is that Agrawal and Devanur (2015) assume an exact solver in their generation of v_t^†, while the submission allows an approximate solver. Another is that the submission proposes the use of non-stationary RL algorithms, and it also highlights how certain objectives (for example, the objective based on mutual information in Section 5.2) can be cast under its framework. Nevertheless, in my opinion Algorithm 1 is the main technical part of the paper, and the two differences highlighted previously are secondary in comparison (this is not to say that they are unimportant, but I see Algorithm 1 in the paper as the main tool that solves the proposed problem). In addition, there is also a minor concern about the presentation of Algorithm 1. The authors display Algorithm 1, which generates a sequence of occupancy measures d_{π_k} for k = 1, …, K. For any occupancy measure d, it is asserted in Line 73 that they recover a policy π that generates d by setting π(s, a) = d(s, a) / Σ_{a∈A} d(s, a). But what if Σ_{a∈A} d(s, a) = 0 for a state s? The authors should address this technicality in Line 73. Nevertheless, I do not think this is a technical issue for their Algorithm 1, in the sense that the authors first output a policy π_k that solves the maximization objective when the scalarization is −λ_k, and only then report the occupancy measure of π_k as d_{π_k}.
NIPS
Title Reward is enough for convex MDPs Abstract Maximising a cumulative reward function that is Markov and stationary, i.e., defined over state-action pairs and independent of time, is sufficient to capture many kinds of goals in a Markov decision process (MDP). However, not all goals can be captured in this manner. In this paper we study convex MDPs in which goals are expressed as convex functions of the stationary distribution and show that they cannot be formulated using stationary reward functions. Convex MDPs generalize the standard reinforcement learning (RL) problem formulation to a larger framework that includes many supervised and unsupervised RL problems, such as apprenticeship learning, constrained MDPs, and so-called ‘pure exploration’. Our approach is to reformulate the convex MDP problem as a min-max game involving policy and cost (negative reward) ‘players’, using Fenchel duality. We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature. 1 Introduction In reinforcement learning (RL), an agent learns how to map situations to actions so as to maximize a cumulative scalar reward signal. The learner is not told which actions to take, but instead must discover which actions lead to the most reward [64]. Mathematically, the RL problem can be written as finding a policy whose state occupancy has the largest inner product with a reward vector [55], i.e., the goal of the agent is to solve RL: max dπ∈K ∑ s,a r(s, a)dπ(s, a), (1) where dπ is the state-action stationary distribution induced by policy π and K is the set of admissible stationary distributions (see Definition 1). A significant body of work is dedicated to solving the RL problem efficiently in challenging domains [45, 62]. However, not all decision making problems of interest take this form. In particular we consider the more general convex MDP problem, Convex MDP: min dπ∈K f(dπ), (2) where f : K → R is a convex function. Sequential decision making problems that take this form include Apprenticeship Learning (AL), pure exploration, and constrained MDPs, among others; see Table 1. In this paper we prove the following claim: We can solve Eq. (2) by using any algorithm that solves Eq. (1) as a subroutine. In other words, any algorithm that solves the standard RL problem can be used to solve the more general convex MDP problem. More specifically, we make the following contributions. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Firstly, we adapt the meta-algorithm of Abernethy and Wang [3] for solving Eq. (2). The key idea is to use Fenchel duality to convert the convex MDP problem into a two-player zero-sum game between the agent (henceforth, policy player) and an adversary that produces rewards (henceforth, cost player) that the agent must maximize [3, 6]. From the agent’s point of view, the game is bilinear, and so for fixed rewards produced by the adversary the problem reduces to the standard RL problem with non-stationary reward (Fig. 1). Secondly, we propose a sample efficient policy player that uses a standard RL algorithm (eg, [35, 60]), and computes an optimistic policy with respect to the non-stationary reward at each iteration. In other words, we use algorithms that were developed to achieve low regret in the standard RL setup, to achieve low regret as policy players in the min-max game we formulate to solve the convex MDP. 
Our main result is that the average of the policies produced by the policy player converges to a solution to the convex MDP problem (Eq. (2)). Inspired by this principle, we also propose a recipe for using deep-RL (DRL) agents to solve convex MDPs heuristically: provide the agent non-stationary rewards from the cost player. We explore this principle in our experiments. Finally, we show that choosing specific algorithms for the policy and cost players unifies several disparate branches of RL problems, such as apprenticeship learning, constrained MDPs, and pure exploration into a single framework, as we summarize in Table 1. 2 Reinforcement Learning Preliminaries In RL an agent interacts with an environment over a number of time steps and seeks to maximize its cumulative reward. We consider two cases, the average reward case and the discounted case. The Markov decision process (MDP) is defined by the tuple (S,A, P,R) for the average reward case and by the tuple (S,A, P,R, γ, d0) for the discounted case. We assume an infinite horizon, finite state-action problem where initially, the state of the agent is sampled according to s0 ∼ d0, then at each time t the agent is in state st ∈ S, selects action at ∈ A according to some policy π(st, ·), receives reward rt ∼ R(st, at) and transitions to new state st+1 ∈ S according to the probability distribution P (·, st, at). The two performance metrics we consider are given by Javgπ = lim T→∞ 1 T E T∑ t=1 rt, J γ π = (1− γ)E ∞∑ t=1 γtrt, (3) for the average reward case and discounted case respectively. The goal of the agent is to find a policy that maximizes Javgπ or J γ π . Any stationary policy π induces a state-action occupancy measure dπ, which measures how often the agent visits each state-action when following π. Let Pπ(st = ·) be the probability measure over states at time t under policy π, then davgπ (s, a) = lim T→∞ 1 T E T∑ t=1 Pπ(st = s)π(s, a), dγπ(s, a) = (1− γ)E ∞∑ t=1 γtPπ(st = s)π(s, a), for the average reward case and the discounted case respectively. With these, we can rewrite the RL objective in Eq. (3) in terms of the occupancy measure using the following well-known result, which for completeness we prove in Appendix B. Proposition 1. For both the average and the discounted case, the agent objective function Eq. (3) can be written in terms of the occupancy measure as Jπ = ∑ s,a r(s, a)dπ(s, a). Given an occupancy measure it is possible to recover the policy by setting π(s, a) = dπ(s, a)/ ∑ a dπ(s, a) if ∑ a dπ(s, a) > 0, and π(s, a) = 1/|A| otherwise. Accordingly, in this paper we shall formulate the RL problem using the state-action occupancy measure, and both the standard RL problem (Eq. (1)) and the convex MDP problem (Eq. (2)) are convex optimization problems in variable dπ. For the purposes of this manuscript we do not make a distinction between the average and discounted settings, other than through the convex polytopes of feasible occupancy measures, which we define next. Definition 1 (State-action occupancy’s polytope [55]). For the average reward case the set of admissible state-action occupancies is Kavg = {dπ | dπ ≥ 0, ∑ s,a dπ(s, a) = 1, ∑ a dπ(s, a) = ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}, and for the discounted case it is given by Kγ = {dπ | dπ ≥ 0, ∑ a dπ(s, a) = (1− γ)d0(s) + γ ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}. We note that being a polytope implies that K is a convex and compact set. 
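Proposition 1's recovery rule, together with the uniform fallback on zero-occupancy states, is easy to make explicit in code. The following is a small illustrative sketch (ours, not the paper's implementation); the function name and toy numbers are assumptions.

```python
import numpy as np

def policy_from_occupancy(d):
    """Recover pi(s, a) = d(s, a) / sum_a d(s, a); uniform on states with zero occupancy."""
    n_states, n_actions = d.shape
    mass = d.sum(axis=1, keepdims=True)                   # sum_a d(s, a) for each state
    pi = np.full((n_states, n_actions), 1.0 / n_actions)  # fallback: uniform policy
    visited = mass[:, 0] > 0
    pi[visited] = d[visited] / mass[visited]
    return pi

# Toy occupancy over 3 states x 2 actions; state 2 is never visited.
d = np.array([[0.3, 0.1],
              [0.0, 0.6],
              [0.0, 0.0]])
print(policy_from_occupancy(d))
# state 0 -> [0.75, 0.25], state 1 -> [0.0, 1.0], state 2 -> uniform [0.5, 0.5]
```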
The convex MDP problem is defined for the tuple (S,A, P, f) in the average cost case and (S,A, P, f, γ, d0) in the discounted case. This tuple is defining a state-action occupancy’s polytope K (Definition 1), and the problem is to find a policy π whose state occupancy dπ is in this polytope and minimizes the function f (Eq. (2)). 3 A Meta-Algorithm for Solving Convex MDPs via RL To solve the convex MDP problem (Eq. (2)) we need to find an occupancy measure dπ (and associated policy) that minimizes the function f . Since both f : K → R and the set K are convex this is a convex optimization problem. However, it is a challenging one due to the nature of learning about the environment through stochastic interactions. In this section we show how to reformulate the convex MDP problem (Eq. (2)) so that standard RL algorithms can be used to solve it, allowing us to harness decades of work on solving vanilla RL problems. To do that we will need the following definition. Definition 2 (Fenchel conjugate). For a function f : Rn → R ∪ {−∞,∞}, its Fenchel conjugate is denoted f∗ : Rn → R ∪ {−∞,∞} and defined as f∗(x) := supy x · y − f(y). Remark 1. The Fenchel conjugate function f∗ is always convex (when it exists) even if f is not. Furthermore, the biconjugate f∗∗ := (f∗)∗ equals f if and only if f is convex and lower semicontinuous. Using this we can rewrite the convex MDP problem (Eq. (2)) as fOPT = min dπ∈K f(dπ) = min dπ∈K max λ∈Λ (λ · dπ − f∗(λ)) = max λ∈Λ min dπ∈K (λ · dπ − f∗(λ)) (4) where Λ is the closure of (sub-)gradient space {∂f(dπ)|dπ ∈ K}, which is a convex set [3, Theorem 4]. As both sets are convex, this is a convex-concave saddle-point problem and a zero-sum two-player game [54, 49], and we were able to swap the order of minimization and maximization using the minimax theorem [71]. With this we define the Lagrangian as L(dπ, λ) := λ · dπ − f∗(λ). For a fixed λ ∈ Λ, minimizing the Lagrangian is a standard RL problem of the form of Eq. (1), i.e., equivalent to maximizing a reward r = −λ. Thus, one might hope that by producing an optimal dual variable λ? we could simply solve d?π = argmindπ∈K L(·, λ ?) for the optimal occupancy measure. However, the next lemma states that this is not possible in general. Lemma 1. There exists an MDP M and convex function f for which there is no stationary reward r ∈ RS×A such that arg maxdπ∈K dπ · r = arg mindπ∈K f(dπ). To see this note that for any reward r there is a deterministic policy that optimizes the reward [55], but for some choices of f no deterministic policy is optimal, eg, when f is the negative entropy function. This result tells us that even if we have access to an optimal dual-variable we cannot simply use it to recover the stationary distribution that solves the convex MDP problem in general. To overcome this issue we develop an algorithm that generates a sequence of policies {πk}k∈N such that the average converges to an optimal policy for Eq. (2), i.e., (1/K) ∑K k=1 d k π → d?π ∈ arg mindπ∈K f(dπ). The algorithm we develop is described in Algorithm 1 and is adapted from the meta-algorithm described in Abernethy and Wang [3]. It is referred to as a meta-algorithm since it relies on supplied sub-routine algorithms Algπ and Algλ. The reinforcement learning algorithm Algπ takes as input a reward vector and returns a state-action occupancy measure dπ. The cost algorithm Algλ can be a more general function of the entire history. We discuss concrete examples of Algπ and Algλ in Section 4. 
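Lemma 1 can be checked on the smallest possible example. Take a single-state MDP with two actions, so K reduces to the probability simplex over actions, and let f be the negative entropy. The sketch below (our illustration, with made-up rewards) shows that the convex objective is minimised at the stochastic point (1/2, 1/2), whereas every linear reward attains its maximum at a vertex of the simplex, i.e. at a deterministic policy, so no stationary reward recovers the interior solution.

```python
import numpy as np

# Single state, two actions: K = {(p, 1 - p) : 0 <= p <= 1}.
grid = np.linspace(1e-6, 1 - 1e-6, 2001)
occupancies = np.stack([grid, 1 - grid], axis=1)

# Convex objective: negative entropy f(d) = sum_a d(a) log d(a).
f = (occupancies * np.log(occupancies)).sum(axis=1)
print("argmin of f:", occupancies[f.argmin()])           # ~ [0.5, 0.5], a stochastic policy

# For any linear reward r, a vertex (deterministic policy) is always among the maximisers
# of r . d over the simplex, so the interior point above is never recovered.
for r in [np.array([1.0, 0.0]), np.array([0.2, 0.7]), np.array([-1.0, -1.0])]:
    scores = occupancies @ r
    print("reward", r, "-> argmax", occupancies[scores.argmax()].round(3))
```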
Algorithm 1: meta-algorithm for convex MDPs 1: Input: convex-concave payoff L : K × Λ→ R, algorithms Algλ,Algπ , K ∈ N 2: for k = 1, . . . ,K do 3: λk = Algλ(d 1 π, . . . , d k−1 π ;L) 4: dkπ = Algπ(−λk) 5: end for 6: Return d̄Kπ = 1 K ∑K k=1 d k π, λ̄ K = 1K ∑K k=1 λ k In order to analyze this algorithm we will need a small detour into online convex optimization (OCO). In OCO, a learner is presented with a sequence of K convex loss functions `1, `2, . . . , `K : K → R and at each round k must select a point xk ∈ K after which it suffers a loss of `k(xk). At time period k the learner is assumed to have perfect knowledge of the loss functions `1, . . . , `k−1. The learner wants to minimize its average regret, defined as R̄K := 1 K ( K∑ k=1 `k(xk)−min x∈K K∑ k=1 `k(x) ) . In the context of convex reinforcement learning and meta-algorithm 1, the loss functions for the cost player are `kλ = −L(·, λk), and for the policy player are `kπ = L(dkπ, ·), with associated average regrets R̄πK and R̄ λ K . This brings us to the following theorem. Theorem 1 (Theorem 2, [3]). Assume that Algπ and Algλ have guaranteed average regret bounded as R̄πK ≤ K and R̄λK ≤ δK , respectively. Then Algorithm 1 outputs d̄Kπ and λ̄K satisfying mindπ∈K L(dπ, λ̄K) ≥ fOPT − K − δK and maxλ∈Λ L(d̄Kπ , λ) ≤ fOPT + K + δK . This theorem tells us that so long as the RL algorithm we employ has guaranteed low-regret, and assuming we choose a reasonable low-regret algorithm for deciding the costs, then the meta-algorithm will produce a solution to the convex MDP problem (Eq. (2)) to any desired tolerance, this is because fOPT ≤ f(d̄Kπ ) = maxλ L(d̄Kπ , λ) ≤ fOPT + K +δK . For example, we shall later present algorithms that have regret bounded as K = δK ≤ O(1/ √ K), in which case we have f(d̄Kπ )− fOPT ≤ O(1/ √ K). (5) Non-Convex f . Remark 1 implies that the game maxλ∈Λ mindπ∈K (λ · dπ − f∗(λ)) is concaveconvex for any function f , so we can solve it with Algorithm 1, even for a non-convex f . From weak duality the value of the Lagrangian on the output of Algorithm 1, L(d̄π, λ̄), is a lower bound on the optimal solution fOPT. In addition, since f(dπ) is always an upper bound on fOPT we have both an upper bound and a lower bound on the optimal value: L(d̄π, λ̄) ≤ fOPT ≤ f(d̄π). 4 Policy and Cost Players for Convex MDPs In this section we present several algorithms for the policy and cost players that can be used in Algorithm 1. Any combination of these algorithms is valid and will come with different practical and theoretical performance. In Section 6 we show that several well known methods in the literature correspond to particular choices of cost and policy players and so fall under our framework. In addition, in this section we assume that λmax = max λ∈Λ max s,a |λ(s, a)| <∞, which holds when the set Λ is compact. One way to guarantee that Λ is compact is to consider functions f with Lipschitz continuous gradients (which implies bounded gradients since the set K is compact). For simplicity, we further assume that λmax ≤ 1. By making this assumption we assure that the non stationary rewards produced by the cost player are bounded by 1 as is usually done in RL. 4.1 Cost Player Follow the Leader (FTL) is a classic OCO algorithm that selects λk to be the best point in hindsight. In the special case of convex MDPs, as defined in Eq. 
(4), FTL has a simpler form: λk = arg max λ∈Λ ∑k−1 j=1 L(djπ, λ) = arg max λ∈Λ ( λ · ∑k−1 j=1 djπ −Kf∗(λ) ) = ∇f(d̄k−1π ), (6) where d̄k−1π = ∑k−1 j=1 d j π and the last equality follows from the fact that (∇f∗)−1 = ∇f [56]. The average regret of FTL is guaranteed to be R̄K ≤ c/ √ K under some assumptions [29]. In some cases, and specifically when the set K is a polytope and the function f is strongly convex, FTL can enjoy logarithmic or even constant regret; see [32, 29] for more details. Online Mirror Descent (OMD) uses the following update [47, 9]: λk = arg max λ∈Λ ( (λ− λk−1) · ∇λL(dk−1π , λk−1) + αkBr(λ, λk−1) ) , where αk is a learning rate and Br is a Bregman divergence [14]. For Br(x) = 0.5||x||22, we get online gradient descent [79] and for Br(x) = x · log(x) we get multiplicative weights [23] as special cases. We also note that OMD is equivalent to a linearized version of Follow the Regularized Leader (FTRL) [43, 28]. The average regret of OMD is R̄K ≤ c/ √ K under some assumptions, see, for example [28]. 4.2 Policy Players 4.2.1 Best Response In OCO, the best response is to simply ignore the history and play the best option on the current round, which has guaranteed average regret bound of R̄K ≤ 0 (this requires knowledge of the current loss function, which is usually not applicable but is in this case). When applied to Eq. (4), it is possible to find the best response dkπ using standard RL techniques since dkπ = arg min dπ∈K Lk(dπ, λk) = arg min dπ∈K dπ · λk − f∗(λk) = arg max dπ∈K dπ · (−λk), which is an RL problem for maximizing the reward (−λk). In principle, any RL algorithm that eventually solves the RL problem can be used to find the best response, which substantiates our claim in the introduction. For example, tabular Q-learning executed for sufficiently long and with a suitable exploration strategy will converge to the optimal policy [72]. In the non-tabular case we could parameterize a deep neural network to represent the Q-values [45] and if the network has sufficient capacity then similar guarantees might hold. We make no claims on efficiency or tractability of this approach, just that in principle such an approach would provide the best-response at each iteration and therefore satisfy the required conditions to solve the convex MDP problem. 4.2.2 Approximate Best Response The caveat in using the best response as a policy player is that in practice, it can only be found approximately by executing an RL algorithm in the environment. This leads to defining an approximate best response via the Probably Approximately Correct (PAC) framework. We say that a policy player is PAC( , δ), if it finds an -optimal policy to an RL problem with probability of at least 1 − δ. In addition, we say that a policy π′ is -optimal if its state occupancy d′π is such that max dπ∈K dπ · (−λk)− d′π · (−λk) ≤ . For example, the algorithm in [40] can find an -optimal policy to the discounted RL problem after seeing O ( SA (1−γ)3 2 log( 1 δ ) ) samples; and the algorithm in [36] can find an -optimal policy for the average reward RL problem after seeing O ( t2mixSA 2 log( 1 δ ) ) samples, where tmix is the mixing time (see, eg, [42, 76] for a formal definition). The following Lemma analyzes the sample complexity of Algorithm 1 with an approximate best response policy player for the average reward RL problem [36]. The result can be easily extended to the discounted case using the algorithm in [40]. Other relaxations to the best response for specific algorithms can be found in [65, 44, 33, 30]. 
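The whole pipeline of Algorithm 1 with an FTL cost player (Eq. (6)) and a best-response policy player fits in a few dozen lines in the tabular discounted setting. The sketch below is a simplification, not the paper's implementation: it assumes a known random MDP, computes best responses by value iteration, obtains each policy's occupancy by solving the flow equations of Definition 1, and minimises the negative entropy of the state-action occupancy (a pure-exploration-style objective). All names (`occupancy`, `best_response`) and the initialisation of the running average are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))        # P[s, a, s'] transition probabilities
d0 = np.ones(S) / S                               # initial state distribution

def occupancy(pi):
    """Discounted state-action occupancy of pi, via the flow constraints of Definition 1."""
    P_pi = np.einsum("sa,sap->sp", pi, P)         # state-to-state kernel under pi
    d_s = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1 - gamma) * d0)
    return d_s[:, None] * pi                      # d(s, a) = d(s) * pi(a | s)

def best_response(reward, iters=500):
    """Greedy policy for a stationary reward, via value iteration (approximate best response)."""
    q = np.zeros((S, A))
    for _ in range(iters):
        q = reward + gamma * P @ q.max(axis=1)
    pi = np.zeros((S, A))
    pi[np.arange(S), q.argmax(axis=1)] = 1.0
    return pi

def f(d):                                         # convex objective: negative entropy of d
    return float((d * np.log(d + 1e-12)).sum())

# Algorithm 1 with the FTL cost player: lambda_k = grad f(d_bar) = log(d_bar) + 1.
K = 200
d_bar = np.full((S, A), 1.0 / (S * A))            # initial guess used only to form lambda_1
d_sum = np.zeros((S, A))
for k in range(1, K + 1):
    lam = np.log(d_bar + 1e-12) + 1.0             # FTL cost (Eq. (6))
    d_k = occupancy(best_response(-lam))          # policy player maximises reward -lambda_k
    d_sum += d_k
    d_bar = d_sum / k                             # Algorithm 1 returns the averaged occupancy
print("f(d_bar) after", K, "iterations:", f(d_bar))
print("simplex lower bound -log(S*A):  ", -np.log(S * A))
```

Note that the dynamics are assumed known here; in the sample-based setting the exact best response is replaced by the approximate or non-stationary RL learners discussed next.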
Lemma 2 (The sample complexity of approximate best response in convex MDPs with average occupancy measure). For a convex function f , running Algorithm 1 with an oracle cost player with regret R̄λK = O(1/K) and an approximate best response policy player that solves the average reward RL problem in iteration k to accuracy k = 1/k returns an occupancy measure d̄Kπ that satisfies f(d̄Kπ )− fOPT ≤ with probability 1− δ after seeing O(t2mixSA log(2K/ δ)/ 3δ3) samples. Similarly, for R̄λK = O(1/ √ K), setting k = 1/ √ k requires O(t2mixSA log(2K/ δ)/ 4δ4) samples. 4.2.3 Non-Stationary RL Algorithms We now discuss a different type of policy players; instead of solving an MDP to accuracy , these algorithms perform a single RL update to the policy, with cost −λk. In our setup the reward is known and deterministic but non-stationary, while in the standard RL setup it is unknown, stochastic, and stationary. We conjecture that any RL algorithm can be adapted to the known non-stationary reward setup we consider here. In most cases both Bayesian [51, 48] and frequentist [8, 35] approaches to the stochastic RL problem solve a modified (eg, by adding optimism) Bellman equation at each time period and swapping in a known but non-stationary reward is unlikely to present a problem. To support this conjecture we shall prove that this is exactly the case for UCRL2 [35]. UCRL2 is an RL algorithm that was designed and analyzed in the standard RL setup, and we shall show that it is easily adapted to the non-stationary but known reward setup that we require. To make this claim more general, we will also discuss a similar result for the MDPO algorithm [61] that was given in a slightly different setup. UCRL2 is a model based algorithm that maintains an estimate of the reward and the transition function as well as confidence sets about those estimates. In our case the reward at time k is known, so we only need to consider uncertainty in the dynamics. UCRL2 guarantees that in any iteration k, the true transition function is in a confidence set with high probability, i.e., P ∈ Pk for confidence set Pk. If we denote by JP,Rπ the value of policy π in an MDP with dynamics P and reward R then the optimistic policy is π̃k = arg maxπ maxP ′∈Pk J P ′,−λk π . Acting according to this policy is guaranteed to attain low regret. In the following results for UCRL2 we will use the constant D, which denotes the diameter of the MDP, see [35, Definition 1] for more details. In the supplementary material (Appendix E), we provide a proof sketch that closely follows [35]. Lemma 3 (Non stationary regret of UCRL2). For an MDP with dynamics P, diameter D, an arbitrary sequence of known and bounded rewards { ri : maxs,a |ri(s, a)| ≤ 1 }K i=1 , such that the optimal average reward at episode k, with respect to P and rk is J?k , then with probability at least 1−δ, the average regret of UCRL2 is at most R̄K = 1K ∑K k=1 J ? k−J π̃k k ≤ O(DS √ A log(K/δ)/K). Next, we give a PAC( , δ) sample complexity result for the mixed policy π̄K , that is produced by running Algorithm 1 with UCRL2 as a policy player. Lemma 4 (The sample complexity of non-stationary RL algorithms in convex MDPs). For a convex function f, running Algorithm 1 with an oracle cost player with regret R̄λK ≤ c0/ √ K and UCRL2 as a policy player returns an occupancy measure d̄Kπ that satisfies f(d̄ K π )− fOPT ≤ with probability 1− δ after K = O ( D2S2A δ2 2 log( 2DSA δ ) ) steps. MDPO. Another optimistic algorithm is Mirror Descent Policy Optimization [60, MDPO]. 
MDPO is a model free RL algorithm that is very similar to popular DRL algorithms like TRPO [58] and MPO [2]. In [24, 59, 5], the authors established the global convergence of MDPO and in [15, 60], the authors showed that MDPO with optimistic exploration enjoys low regret. The analysis for MDPO is given in a finite horizon MDP with horizon H , which is not the focus of our paper. Nevertheless, to support our conjecture that any stochastic RL algorithm can be adapted to the known non-stationary reward setup, we quickly discuss the regret of MDPO in this setup. We also note that MDPO is closer to practical DRL algorithms [70]. In a finite horizon MDP with horizon H and known, non-stationary and bounded rewards, the regret of MDPO is bounded by R̄K ≤ O(H2S √ A/K) [61, Lemma 4] with high probability. To compare this result with UCRL2, we refer to a result from [57], which analyzed UCRL2 in the adversarial setup, that includes our setup as a special case. In a finite horizon MDP with horizon H it was shown that setting δ = SA/K with probability 1 − δ its regret is bounded by R̄K ≤ O(HS √ A log(K)/K) [57, Corollary 5], which is better by a factor of H than MDPO. Discussion. Comparing the results in Lemma 4 with Lemma 2 suggests that using an RL algorithm with non stationary reward as a policy player requires O(1/ 2) samples to find an −optimal policy, while using an approximate best response requires O(1/ 3). In first glance, this results also improves the previously best known result of Hazan et al. [30] for approximate Frank-Wolfe (FW) that requires O(1/ 3) samples. However, there are more details that have to be considered as we now discuss. Firstly, Lemma 4 and Lemma 2 assume access to an oracle cost player with some regret and do not consider how to implement such a cost player. The main challenge is that the cost player does not have access to the true state occupancy and must estimate it from samples. If we do not reuse samples from previous policies to estimate the state occupancy of the current policy we will require O(1/ 3) trajectories overall [30]. A better approach would use the samples from previous episodes to learn the transition function. Then, given the estimated transition function and the policy, we can compute an approximation of the state occupancy. We conjecture that such an approach would lead to a O(1/ 2) sample complexity, closing the gap with standard RL. Secondly, while our focus is on the dependence in , our bound Lemma 4 is not tight in δ, i.e., it scales with 1/δ2 where it should be possible to achieve a log(1/δ) scaling. Again we conjecture an improvement in the bound is possible; see, eg, [38, Appendix F.]. 5 Convex Constraints We have restricted the presentation so far to unconstrained convex problems, in this section we extend the above results to the constrained case. The problem we consider is min dπ∈K f(dπ) subject to gi(dπ) ≤ 0, i = 1, . . .m, where f and the constraint functions gi are convex. Previous work focused on the case where both f and gi are linear [7, 67, 12, 68, 18, 16, 11]. We can use the same Fenchel dual machinery we developed before, but now taking into account the constraints. Consider the Lagrangian L(dπ, µ) = f(dπ) + ∑m i=1 µigi(dπ) = max ν (ν · dπ − f∗(ν)) + ∑m i=1 µi max vi (dπvi − g∗i (vi)) . over dual variables µ ≥ 0, with new variables vi and ν. At first glance this does not look convexconcave, however we can introduce new variables ζi = µivi to obtain L(dπ, µ, ν, ζ1, . . . , ζm) = ν · dπ − f∗(ν) + ∑m i=1 (dπζi − µig∗i (ζi/µi)) . 
(7) This is convex (indeed affine) in dπ and concave in (ν, µ, ζ1, . . . , ζm), since it includes the perspective transform of the functions gi [13]. The Lagrangian involves a cost vector, ν + ∑m i=1 ζi, linearly interacting with dπ, and therefore we can use the same policy players as before to minimize this cost. For the cost player, it is possible to use OMD on Eq. (7) jointly for the variables ν, µ and ζ. It is more challenging to use best-response and FTL for the cost-player variables as the maximum value of the Lagrangian is unbounded for some values of dπ. Another option is to treat the problem as a three-player game. In this case the policy player controls dπ as before, one cost player chooses (ν, ζ1, . . . , ζm) and can use the algorithms we have previously discussed, and the other cost player chooses µ with some restrictions on their choice of algorithm. Analyzing the regret in that case is outside the scope of this paper. 6 Examples In this section we explain how existing algorithms can be seen as instances of the meta-algorithm for various choices of the objective function f and the cost and policy player algorithms Algλ and Algπ . We summarized the relationships in Table 1. 6.1 Apprenticeship Learning In apprenticeship learning (AL), we have an MDP without an explicit reward function. Instead, an expert provides demonstrations which are used to estimate the expert state occupancy measure dE . Abbeel and Ng [1] formalized the AL problem as finding a policy π whose state occupancy is close to that of the expert by minimizing the convex function f(dπ) = ||dπ − dE ||. The convex conjugate of f is given by f∗(y) = y · dE if ||y||∗ ≤ 1 and∞ otherwise, where || · ||∗ denotes the dual norm. Plugging f∗ into Eq. (4) results in the following game: min dπ∈K ||dπ − dE || = min dπ∈K max ||λ||∗≤1 λ · dπ − λ · dE . (8) Inspecting Eq. (8), we can see that the norm in the function f that is used to measure the distance from the expert induces a constraint set for the cost variable, which is a unit ball in the dual norm. Algλ=OMD, Algπ=Best Response/RL. The Multiplicative Weights AL algorithms [65, MWAL] was proposed to solve the AL problem with f(dπ) = ||dπ − dE ||∞. It uses the best response as the policy player and multiplicative weights as the cost player (a special case of OMD). MWAL has also been used to solve AL in contextual MDPs [10] and to find feasible solutions to convex-constrained MDPs [44]. We note that in practice the best response can only be solved approximately, as we discussed in Section 4. Instead, in online AL [61] the authors proposed to use MDPO as the policy player, which guarantees a regret bound of R̄K ≤ c/ √ K. They showed that their algorithm is equivalent to Wasserstein GAIL [73, 78] and in practice tends to perform similarly to GAIL. Algλ=FTL, Algπ=Best Response. When the policy player plays the best response and the cost player plays FTL, Algorithm 1 is equivalent to the Frank-Wolfe algorithm [22, 3] for minimizing f (Eq. (2)). Pseudo-code for this is included in the appendix (Algorithm 3). The algorithm finds a point dkπ ∈ K that has the largest inner-product (best response) with the negative gradient (i.e., FTL). Abbeel and Ng [1] proposed two algorithms for AL, the projection algorithm and the max margin algorithm. The projection algorithm is essentially a FW algorithm, as was suggested in the supplementary [1] and was later shown formally in [75]. Thus, it is a projection free algorithm in the sense that it avoids projecting dπ into K, despite the name. 
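For apprenticeship learning, the FTL-plus-best-response instantiation (i.e. Frank-Wolfe) is particularly simple: taking the squared-norm variant of the objective so that the gradient is dπ − dE, each iteration solves a standard MDP with reward dE − d̄π and mixes the resulting occupancy into the running average. The sketch below is ours, not the paper's pseudo-code; `solve_mdp` is an assumed helper that returns the occupancy measure of a policy (approximately) optimal for a given stationary reward, for instance the value-iteration routine from the earlier sketch.

```python
import numpy as np

def frank_wolfe_al(d_expert, solve_mdp, n_iters=100):
    """Frank-Wolfe ('projection' algorithm) for apprenticeship learning.

    d_expert: estimated expert occupancy measure, a flat vector over (s, a) pairs.
    solve_mdp(reward): assumed helper returning the occupancy of a reward-optimal policy.
    """
    d_bar = solve_mdp(np.zeros_like(d_expert))      # any feasible starting occupancy
    for k in range(1, n_iters + 1):
        cost = d_bar - d_expert                     # FTL cost: gradient of 0.5 * ||d - d_E||^2
        d_k = solve_mdp(-cost)                      # best response: reward d_E - d_bar
        d_bar += (d_k - d_bar) / (k + 1)            # simple diminishing Frank-Wolfe step
    return d_bar
```

Because each iterate is a convex combination of valid occupancy measures, the result stays in K without any explicit projection, which is the sense in which the method is projection-free.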
In their case the gradient is given by ∇f (dπ) = dπ − dE . Thus, finding the best response is equivalent to solving an MDP whose reward is dE − dπ . In a similar fashion, FW can be used to solve convex MDPs more generally [30]. Specifically, in [30], the authors considered the problem of pure exploration, which they defined as finding a policy that maximizes entropy. Fully Corrective FW. The FW algorithm has many variants (see [33] for a survey) some of which enjoy faster rates of convergence in special cases. Concretely, when the constraint set is a polytope, which is the case for convex MDPs (Definition 1), some variants achieve a linear rate of convergence [34, 75]. One such variant is the Fully corrective FW, which replaces the learning rate update (see line 4 of Algorithm 3 in the supplementary), with a minimization problem over the convex hull of occupancy measures at the previous time-step. This is guaranteed to be at least as good as the learning rate update. Interestingly, the second algorithm of Abbeel and Ng [1], the max margin algorithm, is exactly equivalent to this fully corrective FW variant. This implies that the max-margin algorithm enjoys a better theoretical convergence rate than the ‘projection’ variant, as was observed empirically in [1]. 6.2 GAIL and DIAYN: Algλ=FTL, Algπ=RL We now discuss the objectives of two popular algorithms, GAIL [31] and DIAYN [20], which perform AL and diverse skill discovery respectively. Our analysis suggests that GAIL and DIAYN share the same objective function. In GAIL, this objective function is minimized, which is a convex MDP, however, in DIAYN it is maximized, which is therefore not a convex MDP. We start the discussion with DIAYN and follow with a simple construction showing the equivalence to GAIL. DIAYN. Discriminative approaches [26, 20] rely on the intuition that skills are diverse when they are entropic and easily discriminated by observing the states that they visit. Given a probability space (Ω,F ,P), state random variables S : Ω→ S and latent skills Z : Ω→ Z with prior p, the key term of interest being maximized in DIAYN [20] is the mutual information: I(S;Z) = Ez∼p;s∼dzπ [log p(z|s)− log p(z)], (9) where dzπ is the stationary distribution induced by the policy π(a | s, z). For each skill z, this corresponds to a standard RL problem with (conditional) policy π(a | s, z) and reward function r(s|z) = log p(z|s) − log p(z). The first term encourages the policy to visit states for which the underlying skill has high-probability under the posterior p(z | s), while the second term ensures a high entropy distribution over skills. In practice, the full DIAYN objective further regularizes the learnt policy by including entropy terms − log π(a | s, z). For large state spaces, p(z|s) is typically intractable and Eq. 9 is replaced with a variational lower-bound, where the true posterior is replaced with a learned discriminator qφ(z|s). Here, we focus on the simple setting where z is a categorical distribution over |Z| outcomes, yielding |Z| policies πz , and qφ is a classifier over these |Z| skills with parameters φ. We now show that a similar intrinsic reward can be derived using the framework of convex MDPs. We start by writing the true posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = d z π(s)p(z)∑ k d k π(s)p(k) . Combing this with Eq. 
(9) yields: Ez∼p(z),s∼dzπ [log p(z|s)− p(z)] = ∑ z p(z) ∑ s dzπ(s) [ log ( dzπ(s)p(z)∑ k d k π(s)p(k) ) − log p(z) ] = ∑ z p(z)KL(dzπ|| ∑ k p(k)dkπ) = EzKL(dzπ||Ekdkπ), (10) where KL denotes the Kullback–Leibler divergence [39]. Intuitively, finding a set of policies π1, . . . , πz that minimize Eq. (10) will result in finding policies that visit similar states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π. This is a convex MDP because the KL-divergence is jointly convex in both arguments [13, Example 3.19]. We will soon show that this is the objective of GAIL. On the other hand, a set of policies that maximize Eq. (10) is diverse, as the policies visit different states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π . We follow on with deriving the FTL player for the convex MDP in Eq. (10). We will then show that this FTL player is producing an intrinsic reward that is equivalent to the intrinsic reward used in GAIL and DIAYN (despite the fact that DIAYN is not a convex MDP). According to Eq. (6), the FTL cost player will produce a cost λk at iteration k given by ∇dzπKL(d z π|| ∑ k p(k)dkπ) = E z∼p(z) [ log dzπ∑ k d k πp(k) + 1− d z πp(z)∑ k d k πp(k) ] = E z∼p(z) [ log(p(z|s))− log(p(z))︸ ︷︷ ︸ Mutual Information +1− p(z|s)︸ ︷︷ ︸ Gradient correction ] , (11) where the equality follows from writing the posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = dzπ(s)p(z)∑ k d k π(s)p(k) . Replacing the posterior p(z|s) with a learnt discriminator qφ(z|s) recovers the mutual-information rewards of DIAYN, with additional terms 1− p(z | s) which we refer to as “gradient correction” terms. Inspecting the common scenario of a uniform prior over the latent variables, p(z) = 1/|Z|, we get that the expectation of the gradient correction term ∑ z p(z)(1− p(z|s)) = 1− 1/|Z| in each state. From the perspective of the policy player, adding a constant to the reward does not change the best response policy, nor the optimistic policy. Therefore, the gradient correction term does not have an effect on the optimization under a uniform prior, and we retrieved the reward of DIAYN. These algorithms differ however for more general priors p(z), which we explore empirically in Appendix F. GAIL. We further show how Eq. (10) extends to GAIL [31] via a simple construction. Consider a binary latent space of size |Z| = 2, where z = 1 corresponds to the policy of the agent and z = 2 corresponds to the policy of the expert which is fixed. In addition, consider a uniform prior over the latent variables, i.e., p(z = 1) = 12 . By removing the constant terms in Eq. (11), one retrieves the GAIL [31] algorithm. The cost log(p(z|s)) is the probability of the discriminator to identify the agent, and the policy player is MDPO (which is similar to TRPO in GAIL). 7 Discussion In this work we reformulated the convex MDP problem as a convex-concave game between the agent and another player that is producing costs (negative rewards) and proposed a meta-algorithm for solving it. We observed that many algorithms in the literature can be interpreted as instances of the metaalgorithm by selecting different pairs of subroutines employed by the policy and cost players. The Frank-Wolfe algorithm, which combines best response with FTL, was originally proposed for AL [1, 75] but can be used for any convex MDP problem as was suggested in [30]. Zhang et al. 
[77], unified the problems of RL, AL, constrained MDPs with linear constraints and maximum entropy exploration under the framework of convex MDPs. We extended the framework to allow convex constraints (Section 5) and explained the objective of GAIL as a convex MDP (Section 6.2). We also discussed non convex objectives (Section 3) and analyzed unsupervised skill discovery via the maximization of mutual information (Section 6.2) as a special case. Finally, we would like to point out a recent work by Geist et al. [25], which was published concurrently to ours, and studies the convex MDP problem from the viewpoint of mean field games. There are also algorithms for convex MDPs that cannot be explained as instances of Algorithm 1. In particular, Zhang et al. [77] proposed a policy gradient algorithm for convex MDPs in which each step of policy gradient involves solving a new saddle point problem (formulated using the Fenchel dual). This is different from our approach since we solve a single saddle point problem iteratively, and furthermore we have much more flexibility about which algorithms the policy player can use. Moreover, for the convergence guarantee [77, Theorem 4.5] to hold, the saddle point problem has to be solved exactly, while in practice it is only solved approximately [77, Algorithm 1], which hinders its sample efficiency. Fenchel duality has also been used in off policy evaluation (OPE) in [46, 74]. The difference between these works and ours is that we train a policy to minimize an objective, while in OPE a target policy is fixed and its value is estimated from data produced by a behaviour policy. In order to solve a practical convex MDP problem in a given domain it would be prudent to use an RL algorithm that is known to be high performing for the vanilla RL problem as the policy player. From the theoretical point of view this could be MDPO or UCRL2, which we have shown come with strong guarantees. From the practical point of view using a high performing DRL algorithm, which may be specific to the domain, will usually yield the best results. For the cost player using FTL, i.e., using the gradient of the objective function, is typically the best choice. Acknowledgments and Disclosure of Funding We would like to thank Yasin Abbasi-Yadkorie, Vlad Mnih, Jacob Abernethy, Lior Shani and Doina Precup for their comments and discussion on this work. Work done at DeepMind, the authors received no specific funding for this work.
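Specialised to the binary construction of Section 6.2 (z = 1 for the agent, z = 2 for the fixed expert, uniform prior), the Eq. (11) cost reduces to the familiar GAIL discriminator term. A minimal illustrative sketch (ours); `disc_prob_agent` is an assumed array of discriminator outputs q_phi(z = agent | s):

```python
import numpy as np

# Hypothetical discriminator outputs q_phi(z = agent | s) for four states.
disc_prob_agent = np.array([0.9, 0.6, 0.3, 0.05])

cost = np.log(disc_prob_agent)   # GAIL cost log p(z = agent | s), up to additive constants
reward = -cost                   # the policy player maximises -cost
print(reward)                    # states the discriminator attributes to the expert pay off most
```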
1. What is the focus of the paper in regards to reinforcement learning? 2. What is the main contribution of the paper, and how does it relate to previous works in the field? 3. What is the significance of the paper, and who might be interested in its content? 4. Are there any concerns or suggestions regarding the paper's organization or presentation?
Summary Of The Paper Review
Summary Of The Paper The paper studies the convex RL problem, which is defined as finding policy π minimizing f ( d π ) where f is a convex function and d π is the occupancy measure of policy π . Using Fenchel duality, this formulation can be written as a min-max and be solved using the well-known framework of repeated game playing first introduced by Freund and Schapire (1999). The authors show that their framework unifies some well-known approaches in RL by appropriately choosing the no-regret algorithm for the policy and cost player. Review Originality and Quality The proof techniques and theoretical components of the paper already exist in the prior work. However, the paper's main contribution is providing a unifying framework that captures previous approaches through a single lens of convex RLs. Although the materials already exist in the prior work, they are scattered, and the authors did a great job of bringing all of them under a single framework. I believe the paper's main contribution is the framework itself and its connections that have been well discussed. Therefore, I would suggest moving the experiments to the appendix as they are not adding to the paper's contribution. The proofs and claims are sound and well-supported. Clarity I have much enjoyed reading the paper as it's very well-written and well-organized. Significance The paper will be of interest to many researchers in the RL community.
NIPS
Title Reward is enough for convex MDPs Abstract Maximising a cumulative reward function that is Markov and stationary, i.e., defined over state-action pairs and independent of time, is sufficient to capture many kinds of goals in a Markov decision process (MDP). However, not all goals can be captured in this manner. In this paper we study convex MDPs in which goals are expressed as convex functions of the stationary distribution and show that they cannot be formulated using stationary reward functions. Convex MDPs generalize the standard reinforcement learning (RL) problem formulation to a larger framework that includes many supervised and unsupervised RL problems, such as apprenticeship learning, constrained MDPs, and so-called ‘pure exploration’. Our approach is to reformulate the convex MDP problem as a min-max game involving policy and cost (negative reward) ‘players’, using Fenchel duality. We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature. 1 Introduction In reinforcement learning (RL), an agent learns how to map situations to actions so as to maximize a cumulative scalar reward signal. The learner is not told which actions to take, but instead must discover which actions lead to the most reward [64]. Mathematically, the RL problem can be written as finding a policy whose state occupancy has the largest inner product with a reward vector [55], i.e., the goal of the agent is to solve RL: max dπ∈K ∑ s,a r(s, a)dπ(s, a), (1) where dπ is the state-action stationary distribution induced by policy π and K is the set of admissible stationary distributions (see Definition 1). A significant body of work is dedicated to solving the RL problem efficiently in challenging domains [45, 62]. However, not all decision making problems of interest take this form. In particular we consider the more general convex MDP problem, Convex MDP: min dπ∈K f(dπ), (2) where f : K → R is a convex function. Sequential decision making problems that take this form include Apprenticeship Learning (AL), pure exploration, and constrained MDPs, among others; see Table 1. In this paper we prove the following claim: We can solve Eq. (2) by using any algorithm that solves Eq. (1) as a subroutine. In other words, any algorithm that solves the standard RL problem can be used to solve the more general convex MDP problem. More specifically, we make the following contributions. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Firstly, we adapt the meta-algorithm of Abernethy and Wang [3] for solving Eq. (2). The key idea is to use Fenchel duality to convert the convex MDP problem into a two-player zero-sum game between the agent (henceforth, policy player) and an adversary that produces rewards (henceforth, cost player) that the agent must maximize [3, 6]. From the agent’s point of view, the game is bilinear, and so for fixed rewards produced by the adversary the problem reduces to the standard RL problem with non-stationary reward (Fig. 1). Secondly, we propose a sample efficient policy player that uses a standard RL algorithm (eg, [35, 60]), and computes an optimistic policy with respect to the non-stationary reward at each iteration. In other words, we use algorithms that were developed to achieve low regret in the standard RL setup, to achieve low regret as policy players in the min-max game we formulate to solve the convex MDP. 
Our main result is that the average of the policies produced by the policy player converges to a solution to the convex MDP problem (Eq. (2)). Inspired by this principle, we also propose a recipe for using deep-RL (DRL) agents to solve convex MDPs heuristically: provide the agent non-stationary rewards from the cost player. We explore this principle in our experiments. Finally, we show that choosing specific algorithms for the policy and cost players unifies several disparate branches of RL problems, such as apprenticeship learning, constrained MDPs, and pure exploration into a single framework, as we summarize in Table 1. 2 Reinforcement Learning Preliminaries In RL an agent interacts with an environment over a number of time steps and seeks to maximize its cumulative reward. We consider two cases, the average reward case and the discounted case. The Markov decision process (MDP) is defined by the tuple (S,A, P,R) for the average reward case and by the tuple (S,A, P,R, γ, d0) for the discounted case. We assume an infinite horizon, finite state-action problem where initially, the state of the agent is sampled according to s0 ∼ d0, then at each time t the agent is in state st ∈ S, selects action at ∈ A according to some policy π(st, ·), receives reward rt ∼ R(st, at) and transitions to new state st+1 ∈ S according to the probability distribution P (·, st, at). The two performance metrics we consider are given by Javgπ = lim T→∞ 1 T E T∑ t=1 rt, J γ π = (1− γ)E ∞∑ t=1 γtrt, (3) for the average reward case and discounted case respectively. The goal of the agent is to find a policy that maximizes Javgπ or J γ π . Any stationary policy π induces a state-action occupancy measure dπ, which measures how often the agent visits each state-action when following π. Let Pπ(st = ·) be the probability measure over states at time t under policy π, then davgπ (s, a) = lim T→∞ 1 T E T∑ t=1 Pπ(st = s)π(s, a), dγπ(s, a) = (1− γ)E ∞∑ t=1 γtPπ(st = s)π(s, a), for the average reward case and the discounted case respectively. With these, we can rewrite the RL objective in Eq. (3) in terms of the occupancy measure using the following well-known result, which for completeness we prove in Appendix B. Proposition 1. For both the average and the discounted case, the agent objective function Eq. (3) can be written in terms of the occupancy measure as Jπ = ∑ s,a r(s, a)dπ(s, a). Given an occupancy measure it is possible to recover the policy by setting π(s, a) = dπ(s, a)/ ∑ a dπ(s, a) if ∑ a dπ(s, a) > 0, and π(s, a) = 1/|A| otherwise. Accordingly, in this paper we shall formulate the RL problem using the state-action occupancy measure, and both the standard RL problem (Eq. (1)) and the convex MDP problem (Eq. (2)) are convex optimization problems in variable dπ. For the purposes of this manuscript we do not make a distinction between the average and discounted settings, other than through the convex polytopes of feasible occupancy measures, which we define next. Definition 1 (State-action occupancy’s polytope [55]). For the average reward case the set of admissible state-action occupancies is Kavg = {dπ | dπ ≥ 0, ∑ s,a dπ(s, a) = 1, ∑ a dπ(s, a) = ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}, and for the discounted case it is given by Kγ = {dπ | dπ ≥ 0, ∑ a dπ(s, a) = (1− γ)d0(s) + γ ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}. We note that being a polytope implies that K is a convex and compact set. 
The convex MDP problem is defined for the tuple (S,A, P, f) in the average cost case and (S,A, P, f, γ, d0) in the discounted case. This tuple is defining a state-action occupancy’s polytope K (Definition 1), and the problem is to find a policy π whose state occupancy dπ is in this polytope and minimizes the function f (Eq. (2)). 3 A Meta-Algorithm for Solving Convex MDPs via RL To solve the convex MDP problem (Eq. (2)) we need to find an occupancy measure dπ (and associated policy) that minimizes the function f . Since both f : K → R and the set K are convex this is a convex optimization problem. However, it is a challenging one due to the nature of learning about the environment through stochastic interactions. In this section we show how to reformulate the convex MDP problem (Eq. (2)) so that standard RL algorithms can be used to solve it, allowing us to harness decades of work on solving vanilla RL problems. To do that we will need the following definition. Definition 2 (Fenchel conjugate). For a function f : Rn → R ∪ {−∞,∞}, its Fenchel conjugate is denoted f∗ : Rn → R ∪ {−∞,∞} and defined as f∗(x) := supy x · y − f(y). Remark 1. The Fenchel conjugate function f∗ is always convex (when it exists) even if f is not. Furthermore, the biconjugate f∗∗ := (f∗)∗ equals f if and only if f is convex and lower semicontinuous. Using this we can rewrite the convex MDP problem (Eq. (2)) as fOPT = min dπ∈K f(dπ) = min dπ∈K max λ∈Λ (λ · dπ − f∗(λ)) = max λ∈Λ min dπ∈K (λ · dπ − f∗(λ)) (4) where Λ is the closure of (sub-)gradient space {∂f(dπ)|dπ ∈ K}, which is a convex set [3, Theorem 4]. As both sets are convex, this is a convex-concave saddle-point problem and a zero-sum two-player game [54, 49], and we were able to swap the order of minimization and maximization using the minimax theorem [71]. With this we define the Lagrangian as L(dπ, λ) := λ · dπ − f∗(λ). For a fixed λ ∈ Λ, minimizing the Lagrangian is a standard RL problem of the form of Eq. (1), i.e., equivalent to maximizing a reward r = −λ. Thus, one might hope that by producing an optimal dual variable λ? we could simply solve d?π = argmindπ∈K L(·, λ ?) for the optimal occupancy measure. However, the next lemma states that this is not possible in general. Lemma 1. There exists an MDP M and convex function f for which there is no stationary reward r ∈ RS×A such that arg maxdπ∈K dπ · r = arg mindπ∈K f(dπ). To see this note that for any reward r there is a deterministic policy that optimizes the reward [55], but for some choices of f no deterministic policy is optimal, eg, when f is the negative entropy function. This result tells us that even if we have access to an optimal dual-variable we cannot simply use it to recover the stationary distribution that solves the convex MDP problem in general. To overcome this issue we develop an algorithm that generates a sequence of policies {πk}k∈N such that the average converges to an optimal policy for Eq. (2), i.e., (1/K) ∑K k=1 d k π → d?π ∈ arg mindπ∈K f(dπ). The algorithm we develop is described in Algorithm 1 and is adapted from the meta-algorithm described in Abernethy and Wang [3]. It is referred to as a meta-algorithm since it relies on supplied sub-routine algorithms Algπ and Algλ. The reinforcement learning algorithm Algπ takes as input a reward vector and returns a state-action occupancy measure dπ. The cost algorithm Algλ can be a more general function of the entire history. We discuss concrete examples of Algπ and Algλ in Section 4. 
Algorithm 1: meta-algorithm for convex MDPs 1: Input: convex-concave payoff L : K × Λ→ R, algorithms Algλ,Algπ , K ∈ N 2: for k = 1, . . . ,K do 3: λk = Algλ(d 1 π, . . . , d k−1 π ;L) 4: dkπ = Algπ(−λk) 5: end for 6: Return d̄Kπ = 1 K ∑K k=1 d k π, λ̄ K = 1K ∑K k=1 λ k In order to analyze this algorithm we will need a small detour into online convex optimization (OCO). In OCO, a learner is presented with a sequence of K convex loss functions `1, `2, . . . , `K : K → R and at each round k must select a point xk ∈ K after which it suffers a loss of `k(xk). At time period k the learner is assumed to have perfect knowledge of the loss functions `1, . . . , `k−1. The learner wants to minimize its average regret, defined as R̄K := 1 K ( K∑ k=1 `k(xk)−min x∈K K∑ k=1 `k(x) ) . In the context of convex reinforcement learning and meta-algorithm 1, the loss functions for the cost player are `kλ = −L(·, λk), and for the policy player are `kπ = L(dkπ, ·), with associated average regrets R̄πK and R̄ λ K . This brings us to the following theorem. Theorem 1 (Theorem 2, [3]). Assume that Algπ and Algλ have guaranteed average regret bounded as R̄πK ≤ K and R̄λK ≤ δK , respectively. Then Algorithm 1 outputs d̄Kπ and λ̄K satisfying mindπ∈K L(dπ, λ̄K) ≥ fOPT − K − δK and maxλ∈Λ L(d̄Kπ , λ) ≤ fOPT + K + δK . This theorem tells us that so long as the RL algorithm we employ has guaranteed low-regret, and assuming we choose a reasonable low-regret algorithm for deciding the costs, then the meta-algorithm will produce a solution to the convex MDP problem (Eq. (2)) to any desired tolerance, this is because fOPT ≤ f(d̄Kπ ) = maxλ L(d̄Kπ , λ) ≤ fOPT + K +δK . For example, we shall later present algorithms that have regret bounded as K = δK ≤ O(1/ √ K), in which case we have f(d̄Kπ )− fOPT ≤ O(1/ √ K). (5) Non-Convex f . Remark 1 implies that the game maxλ∈Λ mindπ∈K (λ · dπ − f∗(λ)) is concaveconvex for any function f , so we can solve it with Algorithm 1, even for a non-convex f . From weak duality the value of the Lagrangian on the output of Algorithm 1, L(d̄π, λ̄), is a lower bound on the optimal solution fOPT. In addition, since f(dπ) is always an upper bound on fOPT we have both an upper bound and a lower bound on the optimal value: L(d̄π, λ̄) ≤ fOPT ≤ f(d̄π). 4 Policy and Cost Players for Convex MDPs In this section we present several algorithms for the policy and cost players that can be used in Algorithm 1. Any combination of these algorithms is valid and will come with different practical and theoretical performance. In Section 6 we show that several well known methods in the literature correspond to particular choices of cost and policy players and so fall under our framework. In addition, in this section we assume that λmax = max λ∈Λ max s,a |λ(s, a)| <∞, which holds when the set Λ is compact. One way to guarantee that Λ is compact is to consider functions f with Lipschitz continuous gradients (which implies bounded gradients since the set K is compact). For simplicity, we further assume that λmax ≤ 1. By making this assumption we assure that the non stationary rewards produced by the cost player are bounded by 1 as is usually done in RL. 4.1 Cost Player Follow the Leader (FTL) is a classic OCO algorithm that selects λk to be the best point in hindsight. In the special case of convex MDPs, as defined in Eq. 
(4), FTL has a simpler form: λk = arg max λ∈Λ ∑k−1 j=1 L(djπ, λ) = arg max λ∈Λ ( λ · ∑k−1 j=1 djπ −Kf∗(λ) ) = ∇f(d̄k−1π ), (6) where d̄k−1π = ∑k−1 j=1 d j π and the last equality follows from the fact that (∇f∗)−1 = ∇f [56]. The average regret of FTL is guaranteed to be R̄K ≤ c/ √ K under some assumptions [29]. In some cases, and specifically when the set K is a polytope and the function f is strongly convex, FTL can enjoy logarithmic or even constant regret; see [32, 29] for more details. Online Mirror Descent (OMD) uses the following update [47, 9]: λk = arg max λ∈Λ ( (λ− λk−1) · ∇λL(dk−1π , λk−1) + αkBr(λ, λk−1) ) , where αk is a learning rate and Br is a Bregman divergence [14]. For Br(x) = 0.5||x||22, we get online gradient descent [79] and for Br(x) = x · log(x) we get multiplicative weights [23] as special cases. We also note that OMD is equivalent to a linearized version of Follow the Regularized Leader (FTRL) [43, 28]. The average regret of OMD is R̄K ≤ c/ √ K under some assumptions, see, for example [28]. 4.2 Policy Players 4.2.1 Best Response In OCO, the best response is to simply ignore the history and play the best option on the current round, which has guaranteed average regret bound of R̄K ≤ 0 (this requires knowledge of the current loss function, which is usually not applicable but is in this case). When applied to Eq. (4), it is possible to find the best response dkπ using standard RL techniques since dkπ = arg min dπ∈K Lk(dπ, λk) = arg min dπ∈K dπ · λk − f∗(λk) = arg max dπ∈K dπ · (−λk), which is an RL problem for maximizing the reward (−λk). In principle, any RL algorithm that eventually solves the RL problem can be used to find the best response, which substantiates our claim in the introduction. For example, tabular Q-learning executed for sufficiently long and with a suitable exploration strategy will converge to the optimal policy [72]. In the non-tabular case we could parameterize a deep neural network to represent the Q-values [45] and if the network has sufficient capacity then similar guarantees might hold. We make no claims on efficiency or tractability of this approach, just that in principle such an approach would provide the best-response at each iteration and therefore satisfy the required conditions to solve the convex MDP problem. 4.2.2 Approximate Best Response The caveat in using the best response as a policy player is that in practice, it can only be found approximately by executing an RL algorithm in the environment. This leads to defining an approximate best response via the Probably Approximately Correct (PAC) framework. We say that a policy player is PAC( , δ), if it finds an -optimal policy to an RL problem with probability of at least 1 − δ. In addition, we say that a policy π′ is -optimal if its state occupancy d′π is such that max dπ∈K dπ · (−λk)− d′π · (−λk) ≤ . For example, the algorithm in [40] can find an -optimal policy to the discounted RL problem after seeing O ( SA (1−γ)3 2 log( 1 δ ) ) samples; and the algorithm in [36] can find an -optimal policy for the average reward RL problem after seeing O ( t2mixSA 2 log( 1 δ ) ) samples, where tmix is the mixing time (see, eg, [42, 76] for a formal definition). The following Lemma analyzes the sample complexity of Algorithm 1 with an approximate best response policy player for the average reward RL problem [36]. The result can be easily extended to the discounted case using the algorithm in [40]. Other relaxations to the best response for specific algorithms can be found in [65, 44, 33, 30]. 
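The OMD cost update with the Euclidean Bregman divergence amounts to projected gradient ascent on λ. The sketch below is ours and makes two simplifying assumptions: f is the negative entropy over the simplex, so that ∇f*(λ) = softmax(λ), and λ is simply clipped to the box [−1, 1] to respect λmax ≤ 1 rather than projected onto the exact gradient set Λ.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def omd_cost_step(lam, d_prev, alpha=0.5):
    """One Euclidean-OMD (projected gradient ascent) step for the cost player.

    With f(d) = sum d log d over the simplex, grad f*(lam) = softmax(lam), hence
    grad_lam L(d, lam) = d - softmax(lam).  Clipping to [-1, 1] is a simplification.
    """
    grad = d_prev - softmax(lam)
    return np.clip(lam + alpha * grad, -1.0, 1.0)

# Toy usage: the cost reacts to the policy player's previous occupancy measure.
lam = np.zeros(4)
d_prev = np.array([0.7, 0.1, 0.1, 0.1])
for _ in range(3):
    lam = omd_cost_step(lam, d_prev)
print(lam)   # the cost grows on over-visited entries, pushing the next policy away from them
```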
Lemma 2 (The sample complexity of approximate best response in convex MDPs with average occupancy measure). For a convex function $f$, running Algorithm 1 with an oracle cost player with regret $\bar{R}^\lambda_K = O(1/K)$ and an approximate best response policy player that solves the average reward RL problem in iteration $k$ to accuracy $\epsilon_k = 1/k$ returns an occupancy measure $\bar{d}^K_\pi$ that satisfies $f(\bar{d}^K_\pi) - f_{\mathrm{OPT}} \le \epsilon$ with probability $1 - \delta$ after seeing $O\big(t_{\mathrm{mix}}^2 SA \log(2K/\epsilon\delta)/\epsilon^3\delta^3\big)$ samples. Similarly, for $\bar{R}^\lambda_K = O(1/\sqrt{K})$, setting $\epsilon_k = 1/\sqrt{k}$ requires $O\big(t_{\mathrm{mix}}^2 SA \log(2K/\epsilon\delta)/\epsilon^4\delta^4\big)$ samples.

4.2.3 Non-Stationary RL Algorithms

We now discuss a different type of policy players; instead of solving an MDP to accuracy $\epsilon$, these algorithms perform a single RL update to the policy, with cost $-\lambda_k$. In our setup the reward is known and deterministic but non-stationary, while in the standard RL setup it is unknown, stochastic, and stationary. We conjecture that any RL algorithm can be adapted to the known non-stationary reward setup we consider here. In most cases both Bayesian [51, 48] and frequentist [8, 35] approaches to the stochastic RL problem solve a modified (eg, by adding optimism) Bellman equation at each time period and swapping in a known but non-stationary reward is unlikely to present a problem. To support this conjecture we shall prove that this is exactly the case for UCRL2 [35]. UCRL2 is an RL algorithm that was designed and analyzed in the standard RL setup, and we shall show that it is easily adapted to the non-stationary but known reward setup that we require. To make this claim more general, we will also discuss a similar result for the MDPO algorithm [61] that was given in a slightly different setup.

UCRL2 is a model based algorithm that maintains an estimate of the reward and the transition function as well as confidence sets about those estimates. In our case the reward at time $k$ is known, so we only need to consider uncertainty in the dynamics. UCRL2 guarantees that in any iteration $k$, the true transition function is in a confidence set with high probability, i.e., $P \in \mathcal{P}_k$ for confidence set $\mathcal{P}_k$. If we denote by $J^{P,R}_\pi$ the value of policy $\pi$ in an MDP with dynamics $P$ and reward $R$, then the optimistic policy is $\tilde{\pi}_k = \arg\max_\pi \max_{P' \in \mathcal{P}_k} J^{P', -\lambda_k}_\pi$. Acting according to this policy is guaranteed to attain low regret. In the following results for UCRL2 we will use the constant $D$, which denotes the diameter of the MDP, see [35, Definition 1] for more details. In the supplementary material (Appendix E), we provide a proof sketch that closely follows [35].

Lemma 3 (Non stationary regret of UCRL2). For an MDP with dynamics $P$, diameter $D$, an arbitrary sequence of known and bounded rewards $\{r_i : \max_{s,a} |r_i(s,a)| \le 1\}_{i=1}^K$, such that the optimal average reward at episode $k$, with respect to $P$ and $r_k$, is $J^\star_k$, then with probability at least $1-\delta$, the average regret of UCRL2 is at most $\bar{R}_K = \frac{1}{K}\sum_{k=1}^{K} \big(J^\star_k - J^{\tilde{\pi}_k}_k\big) \le O\big(DS\sqrt{A \log(K/\delta)/K}\big)$.

Next, we give a PAC$(\epsilon, \delta)$ sample complexity result for the mixed policy $\bar{\pi}^K$, that is produced by running Algorithm 1 with UCRL2 as a policy player.

Lemma 4 (The sample complexity of non-stationary RL algorithms in convex MDPs). For a convex function $f$, running Algorithm 1 with an oracle cost player with regret $\bar{R}^\lambda_K \le c_0/\sqrt{K}$ and UCRL2 as a policy player returns an occupancy measure $\bar{d}^K_\pi$ that satisfies $f(\bar{d}^K_\pi) - f_{\mathrm{OPT}} \le \epsilon$ with probability $1-\delta$ after $K = O\big(\tfrac{D^2 S^2 A}{\delta^2 \epsilon^2} \log(\tfrac{2DSA}{\delta\epsilon})\big)$ steps.

MDPO. Another optimistic algorithm is Mirror Descent Policy Optimization [60, MDPO].
MDPO is a model free RL algorithm that is very similar to popular DRL algorithms like TRPO [58] and MPO [2]. In [24, 59, 5], the authors established the global convergence of MDPO, and in [15, 60], the authors showed that MDPO with optimistic exploration enjoys low regret. The analysis for MDPO is given in a finite horizon MDP with horizon $H$, which is not the focus of our paper. Nevertheless, to support our conjecture that any stochastic RL algorithm can be adapted to the known non-stationary reward setup, we quickly discuss the regret of MDPO in this setup. We also note that MDPO is closer to practical DRL algorithms [70]. In a finite horizon MDP with horizon $H$ and known, non-stationary and bounded rewards, the regret of MDPO is bounded by $\bar{R}_K \le O(H^2 S\sqrt{A/K})$ [61, Lemma 4] with high probability. To compare this result with UCRL2, we refer to a result from [57], which analyzed UCRL2 in the adversarial setup, which includes our setup as a special case. In a finite horizon MDP with horizon $H$ it was shown that, setting $\delta = SA/K$, with probability $1-\delta$ its regret is bounded by $\bar{R}_K \le O(HS\sqrt{A\log(K)/K})$ [57, Corollary 5], which is better by a factor of $H$ than MDPO.

Discussion. Comparing the results in Lemma 4 with Lemma 2 suggests that using an RL algorithm with non-stationary reward as a policy player requires $O(1/\epsilon^2)$ samples to find an $\epsilon$-optimal policy, while using an approximate best response requires $O(1/\epsilon^3)$. At first glance, this result also improves the previously best known result of Hazan et al. [30] for approximate Frank-Wolfe (FW), which requires $O(1/\epsilon^3)$ samples. However, there are more details that have to be considered, as we now discuss. Firstly, Lemma 4 and Lemma 2 assume access to an oracle cost player with some regret and do not consider how to implement such a cost player. The main challenge is that the cost player does not have access to the true state occupancy and must estimate it from samples. If we do not reuse samples from previous policies to estimate the state occupancy of the current policy, we will require $O(1/\epsilon^3)$ trajectories overall [30]. A better approach would use the samples from previous episodes to learn the transition function. Then, given the estimated transition function and the policy, we can compute an approximation of the state occupancy. We conjecture that such an approach would lead to an $O(1/\epsilon^2)$ sample complexity, closing the gap with standard RL. Secondly, while our focus is on the dependence in $\epsilon$, our bound in Lemma 4 is not tight in $\delta$, i.e., it scales with $1/\delta^2$ where it should be possible to achieve a $\log(1/\delta)$ scaling. Again, we conjecture an improvement in the bound is possible; see, eg, [38, Appendix F].

5 Convex Constraints

We have restricted the presentation so far to unconstrained convex problems; in this section we extend the above results to the constrained case. The problem we consider is
$$\min_{d_\pi \in \mathcal{K}} f(d_\pi) \quad \text{subject to} \quad g_i(d_\pi) \le 0, \; i = 1, \dots, m,$$
where $f$ and the constraint functions $g_i$ are convex. Previous work focused on the case where both $f$ and $g_i$ are linear [7, 67, 12, 68, 18, 16, 11]. We can use the same Fenchel dual machinery we developed before, but now taking into account the constraints. Consider the Lagrangian
$$L(d_\pi, \mu) = f(d_\pi) + \sum_{i=1}^m \mu_i g_i(d_\pi) = \max_\nu \big(\nu \cdot d_\pi - f^*(\nu)\big) + \sum_{i=1}^m \mu_i \max_{v_i} \big(d_\pi \cdot v_i - g_i^*(v_i)\big),$$
over dual variables $\mu \ge 0$, with new variables $v_i$ and $\nu$. At first glance this does not look convex-concave; however, we can introduce new variables $\zeta_i = \mu_i v_i$ to obtain
$$L(d_\pi, \mu, \nu, \zeta_1, \dots, \zeta_m) = \nu \cdot d_\pi - f^*(\nu) + \sum_{i=1}^m \big(d_\pi \cdot \zeta_i - \mu_i g_i^*(\zeta_i/\mu_i)\big).$$
(7) This is convex (indeed affine) in dπ and concave in (ν, µ, ζ1, . . . , ζm), since it includes the perspective transform of the functions gi [13]. The Lagrangian involves a cost vector, ν + ∑m i=1 ζi, linearly interacting with dπ, and therefore we can use the same policy players as before to minimize this cost. For the cost player, it is possible to use OMD on Eq. (7) jointly for the variables ν, µ and ζ. It is more challenging to use best-response and FTL for the cost-player variables as the maximum value of the Lagrangian is unbounded for some values of dπ. Another option is to treat the problem as a three-player game. In this case the policy player controls dπ as before, one cost player chooses (ν, ζ1, . . . , ζm) and can use the algorithms we have previously discussed, and the other cost player chooses µ with some restrictions on their choice of algorithm. Analyzing the regret in that case is outside the scope of this paper. 6 Examples In this section we explain how existing algorithms can be seen as instances of the meta-algorithm for various choices of the objective function f and the cost and policy player algorithms Algλ and Algπ . We summarized the relationships in Table 1. 6.1 Apprenticeship Learning In apprenticeship learning (AL), we have an MDP without an explicit reward function. Instead, an expert provides demonstrations which are used to estimate the expert state occupancy measure dE . Abbeel and Ng [1] formalized the AL problem as finding a policy π whose state occupancy is close to that of the expert by minimizing the convex function f(dπ) = ||dπ − dE ||. The convex conjugate of f is given by f∗(y) = y · dE if ||y||∗ ≤ 1 and∞ otherwise, where || · ||∗ denotes the dual norm. Plugging f∗ into Eq. (4) results in the following game: min dπ∈K ||dπ − dE || = min dπ∈K max ||λ||∗≤1 λ · dπ − λ · dE . (8) Inspecting Eq. (8), we can see that the norm in the function f that is used to measure the distance from the expert induces a constraint set for the cost variable, which is a unit ball in the dual norm. Algλ=OMD, Algπ=Best Response/RL. The Multiplicative Weights AL algorithms [65, MWAL] was proposed to solve the AL problem with f(dπ) = ||dπ − dE ||∞. It uses the best response as the policy player and multiplicative weights as the cost player (a special case of OMD). MWAL has also been used to solve AL in contextual MDPs [10] and to find feasible solutions to convex-constrained MDPs [44]. We note that in practice the best response can only be solved approximately, as we discussed in Section 4. Instead, in online AL [61] the authors proposed to use MDPO as the policy player, which guarantees a regret bound of R̄K ≤ c/ √ K. They showed that their algorithm is equivalent to Wasserstein GAIL [73, 78] and in practice tends to perform similarly to GAIL. Algλ=FTL, Algπ=Best Response. When the policy player plays the best response and the cost player plays FTL, Algorithm 1 is equivalent to the Frank-Wolfe algorithm [22, 3] for minimizing f (Eq. (2)). Pseudo-code for this is included in the appendix (Algorithm 3). The algorithm finds a point dkπ ∈ K that has the largest inner-product (best response) with the negative gradient (i.e., FTL). Abbeel and Ng [1] proposed two algorithms for AL, the projection algorithm and the max margin algorithm. The projection algorithm is essentially a FW algorithm, as was suggested in the supplementary [1] and was later shown formally in [75]. Thus, it is a projection free algorithm in the sense that it avoids projecting dπ into K, despite the name. 
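The following toy sketch (ours, not taken from the papers cited above) mimics this FTL-plus-best-response combination on a probability simplex, used as a stand-in for the occupancy polytope $\mathcal{K}$, with an apprenticeship-learning-style objective $f(d) = \tfrac{1}{2}\|d - d_E\|_2^2$. The linear minimization step returns a vertex of the simplex, mirroring the fact that the best response to a fixed cost is achieved by a deterministic policy, and the running average of the selected vertices plays the role of $\bar{d}^K_\pi$ in Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
d_E = rng.dirichlet(np.ones(n))                  # "expert" occupancy, made up for the illustration
d_bar = np.full(n, 1.0 / n)

for k in range(1, 501):
    lam = d_bar - d_E                            # FTL cost: gradient of f at the current average
    vertex = np.eye(n)[np.argmin(lam)]           # best response: vertex with the largest reward -lam
    d_bar = (1 - 1.0 / k) * d_bar + (1.0 / k) * vertex   # running average of the chosen vertices
print(np.abs(d_bar - d_E).max())                 # small: the average approaches d_E, the minimizer of f
```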
In their case the gradient is given by ∇f (dπ) = dπ − dE . Thus, finding the best response is equivalent to solving an MDP whose reward is dE − dπ . In a similar fashion, FW can be used to solve convex MDPs more generally [30]. Specifically, in [30], the authors considered the problem of pure exploration, which they defined as finding a policy that maximizes entropy. Fully Corrective FW. The FW algorithm has many variants (see [33] for a survey) some of which enjoy faster rates of convergence in special cases. Concretely, when the constraint set is a polytope, which is the case for convex MDPs (Definition 1), some variants achieve a linear rate of convergence [34, 75]. One such variant is the Fully corrective FW, which replaces the learning rate update (see line 4 of Algorithm 3 in the supplementary), with a minimization problem over the convex hull of occupancy measures at the previous time-step. This is guaranteed to be at least as good as the learning rate update. Interestingly, the second algorithm of Abbeel and Ng [1], the max margin algorithm, is exactly equivalent to this fully corrective FW variant. This implies that the max-margin algorithm enjoys a better theoretical convergence rate than the ‘projection’ variant, as was observed empirically in [1]. 6.2 GAIL and DIAYN: Algλ=FTL, Algπ=RL We now discuss the objectives of two popular algorithms, GAIL [31] and DIAYN [20], which perform AL and diverse skill discovery respectively. Our analysis suggests that GAIL and DIAYN share the same objective function. In GAIL, this objective function is minimized, which is a convex MDP, however, in DIAYN it is maximized, which is therefore not a convex MDP. We start the discussion with DIAYN and follow with a simple construction showing the equivalence to GAIL. DIAYN. Discriminative approaches [26, 20] rely on the intuition that skills are diverse when they are entropic and easily discriminated by observing the states that they visit. Given a probability space (Ω,F ,P), state random variables S : Ω→ S and latent skills Z : Ω→ Z with prior p, the key term of interest being maximized in DIAYN [20] is the mutual information: I(S;Z) = Ez∼p;s∼dzπ [log p(z|s)− log p(z)], (9) where dzπ is the stationary distribution induced by the policy π(a | s, z). For each skill z, this corresponds to a standard RL problem with (conditional) policy π(a | s, z) and reward function r(s|z) = log p(z|s) − log p(z). The first term encourages the policy to visit states for which the underlying skill has high-probability under the posterior p(z | s), while the second term ensures a high entropy distribution over skills. In practice, the full DIAYN objective further regularizes the learnt policy by including entropy terms − log π(a | s, z). For large state spaces, p(z|s) is typically intractable and Eq. 9 is replaced with a variational lower-bound, where the true posterior is replaced with a learned discriminator qφ(z|s). Here, we focus on the simple setting where z is a categorical distribution over |Z| outcomes, yielding |Z| policies πz , and qφ is a classifier over these |Z| skills with parameters φ. We now show that a similar intrinsic reward can be derived using the framework of convex MDPs. We start by writing the true posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = d z π(s)p(z)∑ k d k π(s)p(k) . Combing this with Eq. 
(9) yields: Ez∼p(z),s∼dzπ [log p(z|s)− p(z)] = ∑ z p(z) ∑ s dzπ(s) [ log ( dzπ(s)p(z)∑ k d k π(s)p(k) ) − log p(z) ] = ∑ z p(z)KL(dzπ|| ∑ k p(k)dkπ) = EzKL(dzπ||Ekdkπ), (10) where KL denotes the Kullback–Leibler divergence [39]. Intuitively, finding a set of policies π1, . . . , πz that minimize Eq. (10) will result in finding policies that visit similar states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π. This is a convex MDP because the KL-divergence is jointly convex in both arguments [13, Example 3.19]. We will soon show that this is the objective of GAIL. On the other hand, a set of policies that maximize Eq. (10) is diverse, as the policies visit different states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π . We follow on with deriving the FTL player for the convex MDP in Eq. (10). We will then show that this FTL player is producing an intrinsic reward that is equivalent to the intrinsic reward used in GAIL and DIAYN (despite the fact that DIAYN is not a convex MDP). According to Eq. (6), the FTL cost player will produce a cost λk at iteration k given by ∇dzπKL(d z π|| ∑ k p(k)dkπ) = E z∼p(z) [ log dzπ∑ k d k πp(k) + 1− d z πp(z)∑ k d k πp(k) ] = E z∼p(z) [ log(p(z|s))− log(p(z))︸ ︷︷ ︸ Mutual Information +1− p(z|s)︸ ︷︷ ︸ Gradient correction ] , (11) where the equality follows from writing the posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = dzπ(s)p(z)∑ k d k π(s)p(k) . Replacing the posterior p(z|s) with a learnt discriminator qφ(z|s) recovers the mutual-information rewards of DIAYN, with additional terms 1− p(z | s) which we refer to as “gradient correction” terms. Inspecting the common scenario of a uniform prior over the latent variables, p(z) = 1/|Z|, we get that the expectation of the gradient correction term ∑ z p(z)(1− p(z|s)) = 1− 1/|Z| in each state. From the perspective of the policy player, adding a constant to the reward does not change the best response policy, nor the optimistic policy. Therefore, the gradient correction term does not have an effect on the optimization under a uniform prior, and we retrieved the reward of DIAYN. These algorithms differ however for more general priors p(z), which we explore empirically in Appendix F. GAIL. We further show how Eq. (10) extends to GAIL [31] via a simple construction. Consider a binary latent space of size |Z| = 2, where z = 1 corresponds to the policy of the agent and z = 2 corresponds to the policy of the expert which is fixed. In addition, consider a uniform prior over the latent variables, i.e., p(z = 1) = 12 . By removing the constant terms in Eq. (11), one retrieves the GAIL [31] algorithm. The cost log(p(z|s)) is the probability of the discriminator to identify the agent, and the policy player is MDPO (which is similar to TRPO in GAIL). 7 Discussion In this work we reformulated the convex MDP problem as a convex-concave game between the agent and another player that is producing costs (negative rewards) and proposed a meta-algorithm for solving it. We observed that many algorithms in the literature can be interpreted as instances of the metaalgorithm by selecting different pairs of subroutines employed by the policy and cost players. The Frank-Wolfe algorithm, which combines best response with FTL, was originally proposed for AL [1, 75] but can be used for any convex MDP problem as was suggested in [30]. Zhang et al. 
[77], unified the problems of RL, AL, constrained MDPs with linear constraints and maximum entropy exploration under the framework of convex MDPs. We extended the framework to allow convex constraints (Section 5) and explained the objective of GAIL as a convex MDP (Section 6.2). We also discussed non convex objectives (Section 3) and analyzed unsupervised skill discovery via the maximization of mutual information (Section 6.2) as a special case. Finally, we would like to point out a recent work by Geist et al. [25], which was published concurrently to ours, and studies the convex MDP problem from the viewpoint of mean field games. There are also algorithms for convex MDPs that cannot be explained as instances of Algorithm 1. In particular, Zhang et al. [77] proposed a policy gradient algorithm for convex MDPs in which each step of policy gradient involves solving a new saddle point problem (formulated using the Fenchel dual). This is different from our approach since we solve a single saddle point problem iteratively, and furthermore we have much more flexibility about which algorithms the policy player can use. Moreover, for the convergence guarantee [77, Theorem 4.5] to hold, the saddle point problem has to be solved exactly, while in practice it is only solved approximately [77, Algorithm 1], which hinders its sample efficiency. Fenchel duality has also been used in off policy evaluation (OPE) in [46, 74]. The difference between these works and ours is that we train a policy to minimize an objective, while in OPE a target policy is fixed and its value is estimated from data produced by a behaviour policy. In order to solve a practical convex MDP problem in a given domain it would be prudent to use an RL algorithm that is known to be high performing for the vanilla RL problem as the policy player. From the theoretical point of view this could be MDPO or UCRL2, which we have shown come with strong guarantees. From the practical point of view using a high performing DRL algorithm, which may be specific to the domain, will usually yield the best results. For the cost player using FTL, i.e., using the gradient of the objective function, is typically the best choice. Acknowledgments and Disclosure of Funding We would like to thank Yasin Abbasi-Yadkorie, Vlad Mnih, Jacob Abernethy, Lior Shani and Doina Precup for their comments and discussion on this work. Work done at DeepMind, the authors received no specific funding for this work.
1. How does the proposed framework unify various apprenticeship learning and imitation learning algorithms?
2. What are the specific algorithmic recommendations for practitioners in terms of choosing between different algorithms that adjust λ and π?
3. Could you elaborate on the minor adjustments required to Algorithm 1 for supporting CMDPs and other problems with constraints?
4. Why is the experimental section not considered a strength of the submission, and why should it be moved to the appendix?
5. Are there any plans to improve the legibility of the figures on page 9?
6. Should the checklist be included in the main paper or the appendix?
7. Would renaming the optimum value notation (f⋆) to something less similar to the Fenchel conjugate notation (f∗) help avoid confusion?
8. Is Figure 1 truly necessary, or could it be removed without impacting the understanding of the rest of the paper?
9. Is there a typo in the reference cited in line 603 of the appendix?
Summary Of The Paper Review
Summary Of The Paper
The paper discusses the problem of minimizing a convex function of the stationary distribution of a system with Markov dynamics, subject to the (convex) Bellman flow constraint and optionally subject to other convex constraints. The paper provides a family of algorithms designed to tackle this problem. It also proves results about sample complexity. The proposed framework is a significant innovation since it unifies several apprenticeship learning and imitation learning algorithms, which have been proved to be useful.

Review
The submission is original and high-quality. It is clearly written despite the breadth of material it covers. The contribution is significant and a good match for NeurIPS. I have the following questions for the authors:
- As a practitioner, I am interested in concrete algorithmic recommendations. The paper lists several choices of algorithms that adjust λ and several that adjust π, but at the end of the day, given a concrete problem of the form (2), how do I decide which of these to use? A discussion would be really nice.
- In lines 162,163 you say that supporting CMDPs (and other problems with constraints) requires a minor adjustment to the algorithm, but you don't specifically define what adjustment. Can you clarify how Algorithm 1 changes?
- I would also suggest removing the experimental section from the main paper and putting it in the appendix - the strength of your submission lies in the generality of the theoretical contribution.
Minor points:
- The figures on page 9 are completely illegible on a printed copy. Please enlarge and move to the appendix.
- The checklist should be part of the main paper.
- The notations f⋆ for the optimum value and f∗ for the Fenchel conjugate are so similar as to be confusing. I would rename the optimum value.
- I don't think Figure 1 is very useful. To put it bluntly, people who can understand the rest of the paper are unlikely to need it.
- Line 603 in the appendix: "[31, equation 9 1]" should be "[31, equation 9]".
NIPS
Title Reward is enough for convex MDPs Abstract Maximising a cumulative reward function that is Markov and stationary, i.e., defined over state-action pairs and independent of time, is sufficient to capture many kinds of goals in a Markov decision process (MDP). However, not all goals can be captured in this manner. In this paper we study convex MDPs in which goals are expressed as convex functions of the stationary distribution and show that they cannot be formulated using stationary reward functions. Convex MDPs generalize the standard reinforcement learning (RL) problem formulation to a larger framework that includes many supervised and unsupervised RL problems, such as apprenticeship learning, constrained MDPs, and so-called ‘pure exploration’. Our approach is to reformulate the convex MDP problem as a min-max game involving policy and cost (negative reward) ‘players’, using Fenchel duality. We propose a meta-algorithm for solving this problem and show that it unifies many existing algorithms in the literature. 1 Introduction In reinforcement learning (RL), an agent learns how to map situations to actions so as to maximize a cumulative scalar reward signal. The learner is not told which actions to take, but instead must discover which actions lead to the most reward [64]. Mathematically, the RL problem can be written as finding a policy whose state occupancy has the largest inner product with a reward vector [55], i.e., the goal of the agent is to solve RL: max dπ∈K ∑ s,a r(s, a)dπ(s, a), (1) where dπ is the state-action stationary distribution induced by policy π and K is the set of admissible stationary distributions (see Definition 1). A significant body of work is dedicated to solving the RL problem efficiently in challenging domains [45, 62]. However, not all decision making problems of interest take this form. In particular we consider the more general convex MDP problem, Convex MDP: min dπ∈K f(dπ), (2) where f : K → R is a convex function. Sequential decision making problems that take this form include Apprenticeship Learning (AL), pure exploration, and constrained MDPs, among others; see Table 1. In this paper we prove the following claim: We can solve Eq. (2) by using any algorithm that solves Eq. (1) as a subroutine. In other words, any algorithm that solves the standard RL problem can be used to solve the more general convex MDP problem. More specifically, we make the following contributions. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Firstly, we adapt the meta-algorithm of Abernethy and Wang [3] for solving Eq. (2). The key idea is to use Fenchel duality to convert the convex MDP problem into a two-player zero-sum game between the agent (henceforth, policy player) and an adversary that produces rewards (henceforth, cost player) that the agent must maximize [3, 6]. From the agent’s point of view, the game is bilinear, and so for fixed rewards produced by the adversary the problem reduces to the standard RL problem with non-stationary reward (Fig. 1). Secondly, we propose a sample efficient policy player that uses a standard RL algorithm (eg, [35, 60]), and computes an optimistic policy with respect to the non-stationary reward at each iteration. In other words, we use algorithms that were developed to achieve low regret in the standard RL setup, to achieve low regret as policy players in the min-max game we formulate to solve the convex MDP. 
Our main result is that the average of the policies produced by the policy player converges to a solution to the convex MDP problem (Eq. (2)). Inspired by this principle, we also propose a recipe for using deep-RL (DRL) agents to solve convex MDPs heuristically: provide the agent non-stationary rewards from the cost player. We explore this principle in our experiments. Finally, we show that choosing specific algorithms for the policy and cost players unifies several disparate branches of RL problems, such as apprenticeship learning, constrained MDPs, and pure exploration into a single framework, as we summarize in Table 1. 2 Reinforcement Learning Preliminaries In RL an agent interacts with an environment over a number of time steps and seeks to maximize its cumulative reward. We consider two cases, the average reward case and the discounted case. The Markov decision process (MDP) is defined by the tuple (S,A, P,R) for the average reward case and by the tuple (S,A, P,R, γ, d0) for the discounted case. We assume an infinite horizon, finite state-action problem where initially, the state of the agent is sampled according to s0 ∼ d0, then at each time t the agent is in state st ∈ S, selects action at ∈ A according to some policy π(st, ·), receives reward rt ∼ R(st, at) and transitions to new state st+1 ∈ S according to the probability distribution P (·, st, at). The two performance metrics we consider are given by Javgπ = lim T→∞ 1 T E T∑ t=1 rt, J γ π = (1− γ)E ∞∑ t=1 γtrt, (3) for the average reward case and discounted case respectively. The goal of the agent is to find a policy that maximizes Javgπ or J γ π . Any stationary policy π induces a state-action occupancy measure dπ, which measures how often the agent visits each state-action when following π. Let Pπ(st = ·) be the probability measure over states at time t under policy π, then davgπ (s, a) = lim T→∞ 1 T E T∑ t=1 Pπ(st = s)π(s, a), dγπ(s, a) = (1− γ)E ∞∑ t=1 γtPπ(st = s)π(s, a), for the average reward case and the discounted case respectively. With these, we can rewrite the RL objective in Eq. (3) in terms of the occupancy measure using the following well-known result, which for completeness we prove in Appendix B. Proposition 1. For both the average and the discounted case, the agent objective function Eq. (3) can be written in terms of the occupancy measure as Jπ = ∑ s,a r(s, a)dπ(s, a). Given an occupancy measure it is possible to recover the policy by setting π(s, a) = dπ(s, a)/ ∑ a dπ(s, a) if ∑ a dπ(s, a) > 0, and π(s, a) = 1/|A| otherwise. Accordingly, in this paper we shall formulate the RL problem using the state-action occupancy measure, and both the standard RL problem (Eq. (1)) and the convex MDP problem (Eq. (2)) are convex optimization problems in variable dπ. For the purposes of this manuscript we do not make a distinction between the average and discounted settings, other than through the convex polytopes of feasible occupancy measures, which we define next. Definition 1 (State-action occupancy’s polytope [55]). For the average reward case the set of admissible state-action occupancies is Kavg = {dπ | dπ ≥ 0, ∑ s,a dπ(s, a) = 1, ∑ a dπ(s, a) = ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}, and for the discounted case it is given by Kγ = {dπ | dπ ≥ 0, ∑ a dπ(s, a) = (1− γ)d0(s) + γ ∑ s′,a′ P (s, s′, a′)dπ(s ′, a′) ∀s ∈ S}. We note that being a polytope implies that K is a convex and compact set. 
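As a small numerical companion to Proposition 1 and Definition 1 (our own sketch, with made-up sizes and variable names), the snippet below computes the discounted occupancy measure of a stochastic policy by solving the linear flow equations, recovers the policy back from it with the formula above, and checks the Bellman flow constraint that defines $\mathcal{K}^\gamma$.

```python
import numpy as np

def discounted_occupancy(P, pi, d0, gamma):
    """d_pi(s, a) for a stochastic policy pi[s, a], transitions P[s, a, s'], start distribution d0."""
    S = len(d0)
    P_pi = np.einsum('sa,sat->st', pi, P)                      # state-to-state kernel under pi
    rho = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1 - gamma) * d0)
    return rho[:, None] * pi                                   # d_pi(s, a) = rho(s) * pi(a | s)

def policy_from_occupancy(d):
    """pi(s, a) = d(s, a) / sum_a d(s, a); uniform at states with zero occupancy."""
    mass = d.sum(axis=1, keepdims=True)
    return np.where(mass > 0, d / np.where(mass > 0, mass, 1.0), 1.0 / d.shape[1])

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))
d0 = rng.dirichlet(np.ones(S))
pi = rng.dirichlet(np.ones(A), size=S)

d = discounted_occupancy(P, pi, d0, gamma)
flow = (1 - gamma) * d0 + gamma * np.einsum('tas,ta->s', P, d)  # right-hand side of Definition 1
print(np.allclose(d.sum(axis=1), flow), np.allclose(policy_from_occupancy(d), pi))  # True True
```

The same flow check is a convenient way to sanity-test any candidate occupancy measure produced by an algorithm for the discounted polytope.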
The convex MDP problem is defined for the tuple (S,A, P, f) in the average cost case and (S,A, P, f, γ, d0) in the discounted case. This tuple is defining a state-action occupancy’s polytope K (Definition 1), and the problem is to find a policy π whose state occupancy dπ is in this polytope and minimizes the function f (Eq. (2)). 3 A Meta-Algorithm for Solving Convex MDPs via RL To solve the convex MDP problem (Eq. (2)) we need to find an occupancy measure dπ (and associated policy) that minimizes the function f . Since both f : K → R and the set K are convex this is a convex optimization problem. However, it is a challenging one due to the nature of learning about the environment through stochastic interactions. In this section we show how to reformulate the convex MDP problem (Eq. (2)) so that standard RL algorithms can be used to solve it, allowing us to harness decades of work on solving vanilla RL problems. To do that we will need the following definition. Definition 2 (Fenchel conjugate). For a function f : Rn → R ∪ {−∞,∞}, its Fenchel conjugate is denoted f∗ : Rn → R ∪ {−∞,∞} and defined as f∗(x) := supy x · y − f(y). Remark 1. The Fenchel conjugate function f∗ is always convex (when it exists) even if f is not. Furthermore, the biconjugate f∗∗ := (f∗)∗ equals f if and only if f is convex and lower semicontinuous. Using this we can rewrite the convex MDP problem (Eq. (2)) as fOPT = min dπ∈K f(dπ) = min dπ∈K max λ∈Λ (λ · dπ − f∗(λ)) = max λ∈Λ min dπ∈K (λ · dπ − f∗(λ)) (4) where Λ is the closure of (sub-)gradient space {∂f(dπ)|dπ ∈ K}, which is a convex set [3, Theorem 4]. As both sets are convex, this is a convex-concave saddle-point problem and a zero-sum two-player game [54, 49], and we were able to swap the order of minimization and maximization using the minimax theorem [71]. With this we define the Lagrangian as L(dπ, λ) := λ · dπ − f∗(λ). For a fixed λ ∈ Λ, minimizing the Lagrangian is a standard RL problem of the form of Eq. (1), i.e., equivalent to maximizing a reward r = −λ. Thus, one might hope that by producing an optimal dual variable λ? we could simply solve d?π = argmindπ∈K L(·, λ ?) for the optimal occupancy measure. However, the next lemma states that this is not possible in general. Lemma 1. There exists an MDP M and convex function f for which there is no stationary reward r ∈ RS×A such that arg maxdπ∈K dπ · r = arg mindπ∈K f(dπ). To see this note that for any reward r there is a deterministic policy that optimizes the reward [55], but for some choices of f no deterministic policy is optimal, eg, when f is the negative entropy function. This result tells us that even if we have access to an optimal dual-variable we cannot simply use it to recover the stationary distribution that solves the convex MDP problem in general. To overcome this issue we develop an algorithm that generates a sequence of policies {πk}k∈N such that the average converges to an optimal policy for Eq. (2), i.e., (1/K) ∑K k=1 d k π → d?π ∈ arg mindπ∈K f(dπ). The algorithm we develop is described in Algorithm 1 and is adapted from the meta-algorithm described in Abernethy and Wang [3]. It is referred to as a meta-algorithm since it relies on supplied sub-routine algorithms Algπ and Algλ. The reinforcement learning algorithm Algπ takes as input a reward vector and returns a state-action occupancy measure dπ. The cost algorithm Algλ can be a more general function of the entire history. We discuss concrete examples of Algπ and Algλ in Section 4. 
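As a quick numerical sanity check of Definition 2 and Remark 1 above (a toy one-dimensional example of ours, unrelated to any particular MDP), the snippet below approximates the conjugate of a convex function on a grid and verifies that the biconjugate recovers the function up to discretization error.

```python
import numpy as np

grid = np.linspace(-2.0, 2.0, 2001)
f = lambda x: np.abs(x) + 0.5 * x ** 2                  # an arbitrary convex function

def conjugate(g, xs):
    """g*(y) = sup_x (x*y - g(x)), approximated over the grid of x values."""
    return lambda y: np.max(xs * y - g(xs))

f_star = conjugate(f, grid)
f_bistar = conjugate(np.vectorize(f_star), grid)
tests = [-0.9, -0.3, 0.0, 0.4, 0.8]
print(max(abs(f_bistar(x) - f(x)) for x in tests))      # ~0: f** = f for convex f (Remark 1)
```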
Algorithm 1: meta-algorithm for convex MDPs 1: Input: convex-concave payoff L : K × Λ→ R, algorithms Algλ,Algπ , K ∈ N 2: for k = 1, . . . ,K do 3: λk = Algλ(d 1 π, . . . , d k−1 π ;L) 4: dkπ = Algπ(−λk) 5: end for 6: Return d̄Kπ = 1 K ∑K k=1 d k π, λ̄ K = 1K ∑K k=1 λ k In order to analyze this algorithm we will need a small detour into online convex optimization (OCO). In OCO, a learner is presented with a sequence of K convex loss functions `1, `2, . . . , `K : K → R and at each round k must select a point xk ∈ K after which it suffers a loss of `k(xk). At time period k the learner is assumed to have perfect knowledge of the loss functions `1, . . . , `k−1. The learner wants to minimize its average regret, defined as R̄K := 1 K ( K∑ k=1 `k(xk)−min x∈K K∑ k=1 `k(x) ) . In the context of convex reinforcement learning and meta-algorithm 1, the loss functions for the cost player are `kλ = −L(·, λk), and for the policy player are `kπ = L(dkπ, ·), with associated average regrets R̄πK and R̄ λ K . This brings us to the following theorem. Theorem 1 (Theorem 2, [3]). Assume that Algπ and Algλ have guaranteed average regret bounded as R̄πK ≤ K and R̄λK ≤ δK , respectively. Then Algorithm 1 outputs d̄Kπ and λ̄K satisfying mindπ∈K L(dπ, λ̄K) ≥ fOPT − K − δK and maxλ∈Λ L(d̄Kπ , λ) ≤ fOPT + K + δK . This theorem tells us that so long as the RL algorithm we employ has guaranteed low-regret, and assuming we choose a reasonable low-regret algorithm for deciding the costs, then the meta-algorithm will produce a solution to the convex MDP problem (Eq. (2)) to any desired tolerance, this is because fOPT ≤ f(d̄Kπ ) = maxλ L(d̄Kπ , λ) ≤ fOPT + K +δK . For example, we shall later present algorithms that have regret bounded as K = δK ≤ O(1/ √ K), in which case we have f(d̄Kπ )− fOPT ≤ O(1/ √ K). (5) Non-Convex f . Remark 1 implies that the game maxλ∈Λ mindπ∈K (λ · dπ − f∗(λ)) is concaveconvex for any function f , so we can solve it with Algorithm 1, even for a non-convex f . From weak duality the value of the Lagrangian on the output of Algorithm 1, L(d̄π, λ̄), is a lower bound on the optimal solution fOPT. In addition, since f(dπ) is always an upper bound on fOPT we have both an upper bound and a lower bound on the optimal value: L(d̄π, λ̄) ≤ fOPT ≤ f(d̄π). 4 Policy and Cost Players for Convex MDPs In this section we present several algorithms for the policy and cost players that can be used in Algorithm 1. Any combination of these algorithms is valid and will come with different practical and theoretical performance. In Section 6 we show that several well known methods in the literature correspond to particular choices of cost and policy players and so fall under our framework. In addition, in this section we assume that λmax = max λ∈Λ max s,a |λ(s, a)| <∞, which holds when the set Λ is compact. One way to guarantee that Λ is compact is to consider functions f with Lipschitz continuous gradients (which implies bounded gradients since the set K is compact). For simplicity, we further assume that λmax ≤ 1. By making this assumption we assure that the non stationary rewards produced by the cost player are bounded by 1 as is usually done in RL. 4.1 Cost Player Follow the Leader (FTL) is a classic OCO algorithm that selects λk to be the best point in hindsight. In the special case of convex MDPs, as defined in Eq. 
(4), FTL has a simpler form: λk = arg max λ∈Λ ∑k−1 j=1 L(djπ, λ) = arg max λ∈Λ ( λ · ∑k−1 j=1 djπ −Kf∗(λ) ) = ∇f(d̄k−1π ), (6) where d̄k−1π = ∑k−1 j=1 d j π and the last equality follows from the fact that (∇f∗)−1 = ∇f [56]. The average regret of FTL is guaranteed to be R̄K ≤ c/ √ K under some assumptions [29]. In some cases, and specifically when the set K is a polytope and the function f is strongly convex, FTL can enjoy logarithmic or even constant regret; see [32, 29] for more details. Online Mirror Descent (OMD) uses the following update [47, 9]: λk = arg max λ∈Λ ( (λ− λk−1) · ∇λL(dk−1π , λk−1) + αkBr(λ, λk−1) ) , where αk is a learning rate and Br is a Bregman divergence [14]. For Br(x) = 0.5||x||22, we get online gradient descent [79] and for Br(x) = x · log(x) we get multiplicative weights [23] as special cases. We also note that OMD is equivalent to a linearized version of Follow the Regularized Leader (FTRL) [43, 28]. The average regret of OMD is R̄K ≤ c/ √ K under some assumptions, see, for example [28]. 4.2 Policy Players 4.2.1 Best Response In OCO, the best response is to simply ignore the history and play the best option on the current round, which has guaranteed average regret bound of R̄K ≤ 0 (this requires knowledge of the current loss function, which is usually not applicable but is in this case). When applied to Eq. (4), it is possible to find the best response dkπ using standard RL techniques since dkπ = arg min dπ∈K Lk(dπ, λk) = arg min dπ∈K dπ · λk − f∗(λk) = arg max dπ∈K dπ · (−λk), which is an RL problem for maximizing the reward (−λk). In principle, any RL algorithm that eventually solves the RL problem can be used to find the best response, which substantiates our claim in the introduction. For example, tabular Q-learning executed for sufficiently long and with a suitable exploration strategy will converge to the optimal policy [72]. In the non-tabular case we could parameterize a deep neural network to represent the Q-values [45] and if the network has sufficient capacity then similar guarantees might hold. We make no claims on efficiency or tractability of this approach, just that in principle such an approach would provide the best-response at each iteration and therefore satisfy the required conditions to solve the convex MDP problem. 4.2.2 Approximate Best Response The caveat in using the best response as a policy player is that in practice, it can only be found approximately by executing an RL algorithm in the environment. This leads to defining an approximate best response via the Probably Approximately Correct (PAC) framework. We say that a policy player is PAC( , δ), if it finds an -optimal policy to an RL problem with probability of at least 1 − δ. In addition, we say that a policy π′ is -optimal if its state occupancy d′π is such that max dπ∈K dπ · (−λk)− d′π · (−λk) ≤ . For example, the algorithm in [40] can find an -optimal policy to the discounted RL problem after seeing O ( SA (1−γ)3 2 log( 1 δ ) ) samples; and the algorithm in [36] can find an -optimal policy for the average reward RL problem after seeing O ( t2mixSA 2 log( 1 δ ) ) samples, where tmix is the mixing time (see, eg, [42, 76] for a formal definition). The following Lemma analyzes the sample complexity of Algorithm 1 with an approximate best response policy player for the average reward RL problem [36]. The result can be easily extended to the discounted case using the algorithm in [40]. Other relaxations to the best response for specific algorithms can be found in [65, 44, 33, 30]. 
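Below is a minimal sketch of the online-gradient-ascent special case of the OMD cost player mentioned above, with a Euclidean Bregman term and a projection onto $\Lambda = \{\lambda : \|\lambda\|_\infty \le 1\}$, matching the $\lambda_{\max} \le 1$ assumption of this section. The quadratic objective $f(d) = \tfrac{1}{2}\|d\|_2^2$ and the occupancy values are our own illustrative choices; for that $f$, the gradient of the Lagrangian in $\lambda$ is simply $d - \lambda$.

```python
import numpy as np

def omd_cost_step(lam, grad, step):
    """One online gradient ascent step on lambda, projected back onto the l_inf unit ball."""
    return np.clip(lam + step * grad, -1.0, 1.0)

d_prev = np.array([[0.30, 0.10], [0.40, 0.20]])   # occupancy returned by the policy player last round
lam = np.zeros_like(d_prev)
for k in range(1, 200):
    grad = d_prev - lam                           # grad_lambda L(d, lambda) when f(d) = 0.5 * ||d||^2
    lam = omd_cost_step(lam, grad, step=0.5 / np.sqrt(k))
print(lam)                                        # approaches grad f(d_prev) = d_prev in this simple case
```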
Lemma 2 (The sample complexity of approximate best response in convex MDPs with average occupancy measure). For a convex function f , running Algorithm 1 with an oracle cost player with regret R̄λK = O(1/K) and an approximate best response policy player that solves the average reward RL problem in iteration k to accuracy k = 1/k returns an occupancy measure d̄Kπ that satisfies f(d̄Kπ )− fOPT ≤ with probability 1− δ after seeing O(t2mixSA log(2K/ δ)/ 3δ3) samples. Similarly, for R̄λK = O(1/ √ K), setting k = 1/ √ k requires O(t2mixSA log(2K/ δ)/ 4δ4) samples. 4.2.3 Non-Stationary RL Algorithms We now discuss a different type of policy players; instead of solving an MDP to accuracy , these algorithms perform a single RL update to the policy, with cost −λk. In our setup the reward is known and deterministic but non-stationary, while in the standard RL setup it is unknown, stochastic, and stationary. We conjecture that any RL algorithm can be adapted to the known non-stationary reward setup we consider here. In most cases both Bayesian [51, 48] and frequentist [8, 35] approaches to the stochastic RL problem solve a modified (eg, by adding optimism) Bellman equation at each time period and swapping in a known but non-stationary reward is unlikely to present a problem. To support this conjecture we shall prove that this is exactly the case for UCRL2 [35]. UCRL2 is an RL algorithm that was designed and analyzed in the standard RL setup, and we shall show that it is easily adapted to the non-stationary but known reward setup that we require. To make this claim more general, we will also discuss a similar result for the MDPO algorithm [61] that was given in a slightly different setup. UCRL2 is a model based algorithm that maintains an estimate of the reward and the transition function as well as confidence sets about those estimates. In our case the reward at time k is known, so we only need to consider uncertainty in the dynamics. UCRL2 guarantees that in any iteration k, the true transition function is in a confidence set with high probability, i.e., P ∈ Pk for confidence set Pk. If we denote by JP,Rπ the value of policy π in an MDP with dynamics P and reward R then the optimistic policy is π̃k = arg maxπ maxP ′∈Pk J P ′,−λk π . Acting according to this policy is guaranteed to attain low regret. In the following results for UCRL2 we will use the constant D, which denotes the diameter of the MDP, see [35, Definition 1] for more details. In the supplementary material (Appendix E), we provide a proof sketch that closely follows [35]. Lemma 3 (Non stationary regret of UCRL2). For an MDP with dynamics P, diameter D, an arbitrary sequence of known and bounded rewards { ri : maxs,a |ri(s, a)| ≤ 1 }K i=1 , such that the optimal average reward at episode k, with respect to P and rk is J?k , then with probability at least 1−δ, the average regret of UCRL2 is at most R̄K = 1K ∑K k=1 J ? k−J π̃k k ≤ O(DS √ A log(K/δ)/K). Next, we give a PAC( , δ) sample complexity result for the mixed policy π̄K , that is produced by running Algorithm 1 with UCRL2 as a policy player. Lemma 4 (The sample complexity of non-stationary RL algorithms in convex MDPs). For a convex function f, running Algorithm 1 with an oracle cost player with regret R̄λK ≤ c0/ √ K and UCRL2 as a policy player returns an occupancy measure d̄Kπ that satisfies f(d̄ K π )− fOPT ≤ with probability 1− δ after K = O ( D2S2A δ2 2 log( 2DSA δ ) ) steps. MDPO. Another optimistic algorithm is Mirror Descent Policy Optimization [60, MDPO]. 
MDPO is a model free RL algorithm that is very similar to popular DRL algorithms like TRPO [58] and MPO [2]. In [24, 59, 5], the authors established the global convergence of MDPO and in [15, 60], the authors showed that MDPO with optimistic exploration enjoys low regret. The analysis for MDPO is given in a finite horizon MDP with horizon H , which is not the focus of our paper. Nevertheless, to support our conjecture that any stochastic RL algorithm can be adapted to the known non-stationary reward setup, we quickly discuss the regret of MDPO in this setup. We also note that MDPO is closer to practical DRL algorithms [70]. In a finite horizon MDP with horizon H and known, non-stationary and bounded rewards, the regret of MDPO is bounded by R̄K ≤ O(H2S √ A/K) [61, Lemma 4] with high probability. To compare this result with UCRL2, we refer to a result from [57], which analyzed UCRL2 in the adversarial setup, that includes our setup as a special case. In a finite horizon MDP with horizon H it was shown that setting δ = SA/K with probability 1 − δ its regret is bounded by R̄K ≤ O(HS √ A log(K)/K) [57, Corollary 5], which is better by a factor of H than MDPO. Discussion. Comparing the results in Lemma 4 with Lemma 2 suggests that using an RL algorithm with non stationary reward as a policy player requires O(1/ 2) samples to find an −optimal policy, while using an approximate best response requires O(1/ 3). In first glance, this results also improves the previously best known result of Hazan et al. [30] for approximate Frank-Wolfe (FW) that requires O(1/ 3) samples. However, there are more details that have to be considered as we now discuss. Firstly, Lemma 4 and Lemma 2 assume access to an oracle cost player with some regret and do not consider how to implement such a cost player. The main challenge is that the cost player does not have access to the true state occupancy and must estimate it from samples. If we do not reuse samples from previous policies to estimate the state occupancy of the current policy we will require O(1/ 3) trajectories overall [30]. A better approach would use the samples from previous episodes to learn the transition function. Then, given the estimated transition function and the policy, we can compute an approximation of the state occupancy. We conjecture that such an approach would lead to a O(1/ 2) sample complexity, closing the gap with standard RL. Secondly, while our focus is on the dependence in , our bound Lemma 4 is not tight in δ, i.e., it scales with 1/δ2 where it should be possible to achieve a log(1/δ) scaling. Again we conjecture an improvement in the bound is possible; see, eg, [38, Appendix F.]. 5 Convex Constraints We have restricted the presentation so far to unconstrained convex problems, in this section we extend the above results to the constrained case. The problem we consider is min dπ∈K f(dπ) subject to gi(dπ) ≤ 0, i = 1, . . .m, where f and the constraint functions gi are convex. Previous work focused on the case where both f and gi are linear [7, 67, 12, 68, 18, 16, 11]. We can use the same Fenchel dual machinery we developed before, but now taking into account the constraints. Consider the Lagrangian L(dπ, µ) = f(dπ) + ∑m i=1 µigi(dπ) = max ν (ν · dπ − f∗(ν)) + ∑m i=1 µi max vi (dπvi − g∗i (vi)) . over dual variables µ ≥ 0, with new variables vi and ν. At first glance this does not look convexconcave, however we can introduce new variables ζi = µivi to obtain L(dπ, µ, ν, ζ1, . . . , ζm) = ν · dπ − f∗(ν) + ∑m i=1 (dπζi − µig∗i (ζi/µi)) . 
(7) This is convex (indeed affine) in dπ and concave in (ν, µ, ζ1, . . . , ζm), since it includes the perspective transform of the functions gi [13]. The Lagrangian involves a cost vector, ν + ∑m i=1 ζi, linearly interacting with dπ, and therefore we can use the same policy players as before to minimize this cost. For the cost player, it is possible to use OMD on Eq. (7) jointly for the variables ν, µ and ζ. It is more challenging to use best-response and FTL for the cost-player variables as the maximum value of the Lagrangian is unbounded for some values of dπ. Another option is to treat the problem as a three-player game. In this case the policy player controls dπ as before, one cost player chooses (ν, ζ1, . . . , ζm) and can use the algorithms we have previously discussed, and the other cost player chooses µ with some restrictions on their choice of algorithm. Analyzing the regret in that case is outside the scope of this paper. 6 Examples In this section we explain how existing algorithms can be seen as instances of the meta-algorithm for various choices of the objective function f and the cost and policy player algorithms Algλ and Algπ . We summarized the relationships in Table 1. 6.1 Apprenticeship Learning In apprenticeship learning (AL), we have an MDP without an explicit reward function. Instead, an expert provides demonstrations which are used to estimate the expert state occupancy measure dE . Abbeel and Ng [1] formalized the AL problem as finding a policy π whose state occupancy is close to that of the expert by minimizing the convex function f(dπ) = ||dπ − dE ||. The convex conjugate of f is given by f∗(y) = y · dE if ||y||∗ ≤ 1 and∞ otherwise, where || · ||∗ denotes the dual norm. Plugging f∗ into Eq. (4) results in the following game: min dπ∈K ||dπ − dE || = min dπ∈K max ||λ||∗≤1 λ · dπ − λ · dE . (8) Inspecting Eq. (8), we can see that the norm in the function f that is used to measure the distance from the expert induces a constraint set for the cost variable, which is a unit ball in the dual norm. Algλ=OMD, Algπ=Best Response/RL. The Multiplicative Weights AL algorithms [65, MWAL] was proposed to solve the AL problem with f(dπ) = ||dπ − dE ||∞. It uses the best response as the policy player and multiplicative weights as the cost player (a special case of OMD). MWAL has also been used to solve AL in contextual MDPs [10] and to find feasible solutions to convex-constrained MDPs [44]. We note that in practice the best response can only be solved approximately, as we discussed in Section 4. Instead, in online AL [61] the authors proposed to use MDPO as the policy player, which guarantees a regret bound of R̄K ≤ c/ √ K. They showed that their algorithm is equivalent to Wasserstein GAIL [73, 78] and in practice tends to perform similarly to GAIL. Algλ=FTL, Algπ=Best Response. When the policy player plays the best response and the cost player plays FTL, Algorithm 1 is equivalent to the Frank-Wolfe algorithm [22, 3] for minimizing f (Eq. (2)). Pseudo-code for this is included in the appendix (Algorithm 3). The algorithm finds a point dkπ ∈ K that has the largest inner-product (best response) with the negative gradient (i.e., FTL). Abbeel and Ng [1] proposed two algorithms for AL, the projection algorithm and the max margin algorithm. The projection algorithm is essentially a FW algorithm, as was suggested in the supplementary [1] and was later shown formally in [75]. Thus, it is a projection free algorithm in the sense that it avoids projecting dπ into K, despite the name. 
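For the $\ell_\infty$ version of the apprenticeship learning objective used by MWAL, the dual-norm constraint in Eq. (8) keeps the cost inside the $\ell_1$ unit ball. The sketch below is our own construction, for illustration only (MWAL itself uses multiplicative weights rather than this subgradient): it shows what an FTL-style cost looks like in that case, namely a $\pm 1$ cost on the single state-action pair where the averaged occupancy deviates most from the expert.

```python
import numpy as np

def linf_al_cost(d_bar, d_expert):
    """A subgradient of f(d) = ||d - d_expert||_inf at d_bar: +/-1 on the largest absolute deviation."""
    diff = (d_bar - d_expert).ravel()
    i = int(np.argmax(np.abs(diff)))
    lam = np.zeros_like(diff)
    lam[i] = np.sign(diff[i])
    return lam.reshape(d_bar.shape)               # lies in the l1 unit ball, the dual-norm ball of Eq. (8)

d_expert = np.array([[0.40, 0.10], [0.30, 0.20]])
d_bar = np.full((2, 2), 0.25)
lam = linf_al_cost(d_bar, d_expert)
print(lam)   # the induced reward -lam targets the coordinate with the largest deviation from the expert
```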
In their case the gradient is given by ∇f (dπ) = dπ − dE . Thus, finding the best response is equivalent to solving an MDP whose reward is dE − dπ . In a similar fashion, FW can be used to solve convex MDPs more generally [30]. Specifically, in [30], the authors considered the problem of pure exploration, which they defined as finding a policy that maximizes entropy. Fully Corrective FW. The FW algorithm has many variants (see [33] for a survey) some of which enjoy faster rates of convergence in special cases. Concretely, when the constraint set is a polytope, which is the case for convex MDPs (Definition 1), some variants achieve a linear rate of convergence [34, 75]. One such variant is the Fully corrective FW, which replaces the learning rate update (see line 4 of Algorithm 3 in the supplementary), with a minimization problem over the convex hull of occupancy measures at the previous time-step. This is guaranteed to be at least as good as the learning rate update. Interestingly, the second algorithm of Abbeel and Ng [1], the max margin algorithm, is exactly equivalent to this fully corrective FW variant. This implies that the max-margin algorithm enjoys a better theoretical convergence rate than the ‘projection’ variant, as was observed empirically in [1]. 6.2 GAIL and DIAYN: Algλ=FTL, Algπ=RL We now discuss the objectives of two popular algorithms, GAIL [31] and DIAYN [20], which perform AL and diverse skill discovery respectively. Our analysis suggests that GAIL and DIAYN share the same objective function. In GAIL, this objective function is minimized, which is a convex MDP, however, in DIAYN it is maximized, which is therefore not a convex MDP. We start the discussion with DIAYN and follow with a simple construction showing the equivalence to GAIL. DIAYN. Discriminative approaches [26, 20] rely on the intuition that skills are diverse when they are entropic and easily discriminated by observing the states that they visit. Given a probability space (Ω,F ,P), state random variables S : Ω→ S and latent skills Z : Ω→ Z with prior p, the key term of interest being maximized in DIAYN [20] is the mutual information: I(S;Z) = Ez∼p;s∼dzπ [log p(z|s)− log p(z)], (9) where dzπ is the stationary distribution induced by the policy π(a | s, z). For each skill z, this corresponds to a standard RL problem with (conditional) policy π(a | s, z) and reward function r(s|z) = log p(z|s) − log p(z). The first term encourages the policy to visit states for which the underlying skill has high-probability under the posterior p(z | s), while the second term ensures a high entropy distribution over skills. In practice, the full DIAYN objective further regularizes the learnt policy by including entropy terms − log π(a | s, z). For large state spaces, p(z|s) is typically intractable and Eq. 9 is replaced with a variational lower-bound, where the true posterior is replaced with a learned discriminator qφ(z|s). Here, we focus on the simple setting where z is a categorical distribution over |Z| outcomes, yielding |Z| policies πz , and qφ is a classifier over these |Z| skills with parameters φ. We now show that a similar intrinsic reward can be derived using the framework of convex MDPs. We start by writing the true posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = d z π(s)p(z)∑ k d k π(s)p(k) . Combing this with Eq. 
(9) yields: Ez∼p(z),s∼dzπ [log p(z|s)− p(z)] = ∑ z p(z) ∑ s dzπ(s) [ log ( dzπ(s)p(z)∑ k d k π(s)p(k) ) − log p(z) ] = ∑ z p(z)KL(dzπ|| ∑ k p(k)dkπ) = EzKL(dzπ||Ekdkπ), (10) where KL denotes the Kullback–Leibler divergence [39]. Intuitively, finding a set of policies π1, . . . , πz that minimize Eq. (10) will result in finding policies that visit similar states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π. This is a convex MDP because the KL-divergence is jointly convex in both arguments [13, Example 3.19]. We will soon show that this is the objective of GAIL. On the other hand, a set of policies that maximize Eq. (10) is diverse, as the policies visit different states, measured using the KL distance between their respective state occupancies d1π, . . . , d z π . We follow on with deriving the FTL player for the convex MDP in Eq. (10). We will then show that this FTL player is producing an intrinsic reward that is equivalent to the intrinsic reward used in GAIL and DIAYN (despite the fact that DIAYN is not a convex MDP). According to Eq. (6), the FTL cost player will produce a cost λk at iteration k given by ∇dzπKL(d z π|| ∑ k p(k)dkπ) = E z∼p(z) [ log dzπ∑ k d k πp(k) + 1− d z πp(z)∑ k d k πp(k) ] = E z∼p(z) [ log(p(z|s))− log(p(z))︸ ︷︷ ︸ Mutual Information +1− p(z|s)︸ ︷︷ ︸ Gradient correction ] , (11) where the equality follows from writing the posterior as a function of the per-skill state occupancy dzπ = p(s | z), and using Bayes rules, p(z|s) = dzπ(s)p(z)∑ k d k π(s)p(k) . Replacing the posterior p(z|s) with a learnt discriminator qφ(z|s) recovers the mutual-information rewards of DIAYN, with additional terms 1− p(z | s) which we refer to as “gradient correction” terms. Inspecting the common scenario of a uniform prior over the latent variables, p(z) = 1/|Z|, we get that the expectation of the gradient correction term ∑ z p(z)(1− p(z|s)) = 1− 1/|Z| in each state. From the perspective of the policy player, adding a constant to the reward does not change the best response policy, nor the optimistic policy. Therefore, the gradient correction term does not have an effect on the optimization under a uniform prior, and we retrieved the reward of DIAYN. These algorithms differ however for more general priors p(z), which we explore empirically in Appendix F. GAIL. We further show how Eq. (10) extends to GAIL [31] via a simple construction. Consider a binary latent space of size |Z| = 2, where z = 1 corresponds to the policy of the agent and z = 2 corresponds to the policy of the expert which is fixed. In addition, consider a uniform prior over the latent variables, i.e., p(z = 1) = 12 . By removing the constant terms in Eq. (11), one retrieves the GAIL [31] algorithm. The cost log(p(z|s)) is the probability of the discriminator to identify the agent, and the policy player is MDPO (which is similar to TRPO in GAIL). 7 Discussion In this work we reformulated the convex MDP problem as a convex-concave game between the agent and another player that is producing costs (negative rewards) and proposed a meta-algorithm for solving it. We observed that many algorithms in the literature can be interpreted as instances of the metaalgorithm by selecting different pairs of subroutines employed by the policy and cost players. The Frank-Wolfe algorithm, which combines best response with FTL, was originally proposed for AL [1, 75] but can be used for any convex MDP problem as was suggested in [30]. Zhang et al. 
7 Discussion In this work we reformulated the convex MDP problem as a convex-concave game between the agent and another player that produces costs (negative rewards), and we proposed a meta-algorithm for solving it. We observed that many algorithms in the literature can be interpreted as instances of the meta-algorithm by selecting different pairs of subroutines employed by the policy and cost players. The Frank-Wolfe algorithm, which combines best response with FTL, was originally proposed for AL [1, 75] but can be used for any convex MDP problem, as was suggested in [30]. Zhang et al. [77] unified the problems of RL, AL, constrained MDPs with linear constraints and maximum entropy exploration under the framework of convex MDPs. We extended the framework to allow convex constraints (Section 5) and explained the objective of GAIL as a convex MDP (Section 6.2). We also discussed non-convex objectives (Section 3) and analyzed unsupervised skill discovery via the maximization of mutual information (Section 6.2) as a special case. Finally, we would like to point out a recent work by Geist et al. [25], which was published concurrently to ours and studies the convex MDP problem from the viewpoint of mean field games. There are also algorithms for convex MDPs that cannot be explained as instances of Algorithm 1. In particular, Zhang et al. [77] proposed a policy gradient algorithm for convex MDPs in which each step of policy gradient involves solving a new saddle point problem (formulated using the Fenchel dual). This is different from our approach, since we solve a single saddle point problem iteratively; furthermore, we have much more flexibility about which algorithms the policy player can use. Moreover, for the convergence guarantee [77, Theorem 4.5] to hold, the saddle point problem has to be solved exactly, while in practice it is only solved approximately [77, Algorithm 1], which hinders its sample efficiency. Fenchel duality has also been used in off-policy evaluation (OPE) in [46, 74]. The difference between these works and ours is that we train a policy to minimize an objective, while in OPE a target policy is fixed and its value is estimated from data produced by a behaviour policy. In order to solve a practical convex MDP problem in a given domain, it would be prudent to use, as the policy player, an RL algorithm that is known to be high performing for the vanilla RL problem. From the theoretical point of view this could be MDPO or UCRL2, which we have shown come with strong guarantees. From the practical point of view, using a high-performing DRL algorithm, which may be specific to the domain, will usually yield the best results. For the cost player, using FTL, i.e., using the gradient of the objective function, is typically the best choice. Acknowledgments and Disclosure of Funding We would like to thank Yasin Abbasi-Yadkori, Vlad Mnih, Jacob Abernethy, Lior Shani and Doina Precup for their comments and discussion on this work. Work done at DeepMind; the authors received no specific funding for this work.
1. What is the focus of the paper regarding extended reinforcement learning?
2. What are the strengths and weaknesses of the proposed methodology in comparison to prior works?
3. How does the reviewer assess the presentation and clarity of the paper's content?
4. What are the doubts regarding technical novelty and contributions?
5. What are the suggestions for improving the paper's content and experimental evaluation?
Summary Of The Paper
Review
Summary Of The Paper
This paper addresses an extended reinforcement learning setting with general utilities, where the agent's objective can be any convex function of the state-action distribution. In particular, it recasts the problem as a min-max game between a policy player and a cost player, and it describes a principled methodology to solve the game. Finally, it provides ways to instantiate the methodology with known algorithms, and a brief experimental evaluation in simple domains.
Review
POST-REBUTTAL: The score has been updated for the reasons explained in the comment below.
This paper studies a very relevant problem, which practically unifies many RL objectives of interest under a unique setting, and it provides a neat formulation of the framework as a two-player min-max game. However, the major takeaways of this paper are not particularly surprising w.r.t. prior works, and some choices in the presentation of the practical algorithms are quite confusing. Thus, I am currently evaluating this work as borderline, though a convincing authors' response and slight changes to the main text might better clarify the value of the paper. I provide below some detailed comments.
TECHNICAL NOVELTY
The Convex MDP framework has been previously formulated in (J. Zhang et al., 2020). The meta-algorithm and its analysis are taken from (Abernethy and Wang, 2017). The FTL and best response combination is practically equivalent to the algorithm in (Hazan et al., 2019). Even the min-max game interpretation, which is novel to my knowledge, is somehow suggested by the work in (J. Zhang et al., 2020), where the gradient computation requires solving a max-min problem between the policy update and the reward. I believe that this work successfully meshes these ingredients in a more coherent presentation (Sect. 1 to Sect. 3), but I have some doubts on the actual novelty of the contributions. Can the authors clearly specify the technical novelty their work contributes w.r.t. prior works?
PRESENTATION AND CONTRIBUTIONS
I found the paper quite neat and clear from Sect. 1 to Sect. 3, whereas it becomes somewhat confusing thereafter. Instead of the various alternatives provided by Sect. 4, I would have rather focused on a single algorithm to tackle convex RL in an online setting, then provided either a complete theoretical analysis or a more thorough experimental evaluation in the next sections. Especially, some of the questions that could be targeted are: Is there any separation between the learning complexity of the convex MDP setting and the standard MDP setting? Can we deploy a single algorithm that is able to address the variety of objectives reported in Table 1 in a practical way?
GENERAL METHODOLOGY
It is not completely clear to me why different objectives should require different implementations of the policy player and the cost player, rather than just considering the Fenchel dual of the respective objective function. The authors rightly noted that the algorithm from (J. Zhang et al., 2020) is committing to a specific methodology (policy gradient), but a key feature of their approach is that it can be employed with any convex objective. Can the authors clarify why they are not providing an objective-agnostic implementation of Algorithm 1?
NIPS
Title Approximate Secular Equations for the Cubic Regularization Subproblem Abstract The cubic regularization method (CR) is a popular algorithm for unconstrained non-convex optimization. At each iteration, CR solves a cubically regularized quadratic problem, called the cubic regularization subproblem (CRS). One way to solve the CRS relies on solving the secular equation, whose computational bottleneck lies in the computation of all eigenvalues of the Hessian matrix. In this paper, we propose and analyze a novel CRS solver based on an approximate secular equation, which requires only some of the Hessian eigenvalues and is therefore much more efficient. Two approximate secular equations (ASEs) are developed. For both ASEs, we first study the existence and uniqueness of their roots and then establish an upper bound on the gap between the root and that of the standard secular equation. Such an upper bound can in turn be used to bound the distance from the approximate CRS solution based on ASEs to the true CRS solution, thus offering a theoretical guarantee for our CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning. Numerical experiments with synthetic and real data-sets are conducted to investigate the practical performance of the proposed CRS solver. Experimental results show that the proposed solver outperforms two state-of-the-art methods. 1 Introduction The cubic regularization method (CR) is a variant of Newton’s method proposed by Griewank [8], and later independently by Nesterov and Polyak [12], and Weiser et al. [16]. It gained significant attention over the last decade due to its attractive theoretical properties, such as convergence to second-order critical points [12] and a quadratic convergence rate under mild assumptions [17]. Each iteration of CR solves a problem of the following form, called the cubic regularization subproblem (CRS):
\[
\min_{x \in \mathbb{R}^n} \; f_{A,b,\rho}(x) := b^{\mathsf T} x + \tfrac{1}{2} x^{\mathsf T} A x + \tfrac{\rho}{3} \|x\|^3, \tag{1}
\]
where ρ > 0 is the regularization parameter, b ∈ R^n, and A ∈ R^{n×n} is a symmetric matrix, not necessarily positive semidefinite. Many variants and generalizations of CR have been developed, including the Adaptive Regularization Using Cubics (ARC), which allows for a dynamic choice of ρ and inexact CRS solutions [3, 4], accelerated CR using momentum [15] and stochastic CR for solving stochastic optimization [14]. Despite the theoretical success, the practicality of CR and its variants relies critically on the CRS solver, a topic that has attracted considerable research recently [2, 1, 10, 9]. The goal of this paper is to develop a novel, efficient CRS solver along with theoretical guarantees. A popular approach for solving the CRS is via solving the so-called secular equation. We now review this approach. Towards that, we denote by λ_1 ≤ · · · ≤ λ_n the eigenvalues of A and by v_1, · · · , v_n the corresponding eigenvectors. In other words, we have the eigendecomposition A = Σ_{i=1}^n λ_i v_i v_i^T = VΛV^T, where Λ = diag(λ_1, . . . , λ_n) and V = [v_1, · · · , v_n]. Note that the eigenvalues λ_i are not necessarily positive due to the indefiniteness of the matrix A. Also, we denote the Euclidean norm by ‖·‖. Proposition 1 ([12, 3]). A vector x* solves the CRS (1) if and only if it satisfies the system
\[
(A + \rho\|x^*\|\, I)\, x^* + b = 0, \tag{2}
\]
\[
A + \rho\|x^*\|\, I \succeq 0. \tag{3}
\]
Moreover, if A + ρ‖x*‖I ≻ 0, then x* is the unique solution (and hence a critical point). Proposition 2 ([1]). Let x* be a global solution of CRS (1) and consider the eigendecomposition A = Σ_{i=1}^n λ_i v_i v_i^T = VΛV^T, where Λ = diag(λ_1, . . . , λ_n) and V = [v_1, · · · , v_n]. If b^T v_1 ≠ 0, then A + ρ‖x*‖I ≻ 0 and the solution x* is the unique critical point (and hence the unique solution). Conversely, if b^T v_1 = 0, then the CRS (1) has multiple optimal solutions. From Proposition 2, if b^T v_1 ≠ 0, then there is only one critical point, which is also the optimal solution, and hence the gradient norm ‖∇f_{A,b,ρ}(x)‖ serves as an optimality measure. Throughout the paper, we assume b^T v_1 ≠ 0, under which the CRS is said to be in the easy case. This is without much loss of generality, as this holds generically true in practice. Moreover, we could easily avoid the hard case (b^T v_1 = 0) by slightly perturbing the vector b, see [12, 2]. To introduce the secular equation, note that in the easy case, conditions (2) and (3) can be written as
\[
(\Lambda + \sigma I)\, y^* = -c, \qquad \lambda_1 + \sigma > 0,
\]
where σ := ρ‖x*‖, [y*_1, · · · , y*_n]^T := y* = V^T x* and [c_1, · · · , c_n]^T := c = V^T b. Therefore, y*_i = −c_i/(λ_i + σ), i = 1, . . . , n. Since the Euclidean norm is invariant to orthogonal transformation, we have
\[
\frac{\sigma^2}{\rho^2} = \|x^*\|^2 = \|y^*\|^2 = \sum_{i=1}^{n} \frac{c_i^2}{(\lambda_i + \sigma)^2}.
\]
Consequently, instead of solving the complicated nonlinear system (2)-(3), we could solve the CRS (1) by first finding the (unique) root σ > max{−λ_1, 0} of the equation
\[
w(\sigma) = \sum_{i=1}^{n} \frac{c_i^2}{(\lambda_i + \sigma)^2} - \frac{\sigma^2}{\rho^2}, \tag{4}
\]
called the secular equation, and then solving the linear system (A + σI)x = −b. The first step can be done efficiently by using existing root-finding algorithms (e.g., the bisection method and Newton’s method). The disadvantage of the above CRS solver, based on the secular equation (4), is that it requires the full spectrum of A, which costs O(n³). This approach is viable only for low- to moderate-dimensional problems. However, when n is large, computing all eigenvalues of A is prohibitive. Worse still, after the root is solved, we still need to apply iterative methods (e.g., the Lanczos method) to solve the large-scale linear system (2). We are thus motivated to approximate the secular equation by using only some of the eigenvalues of A, as opposed to all. As our main contribution, we developed two different approximate secular equations (ASEs), both of which require computing m < n eigenvalues of A. The cost for forming the approximate secular equations is only O(mn²), and hence the resulting CRS solver is much more efficient and scalable. On the theoretical side, for each of the proposed approximate secular equations, we first studied the existence and uniqueness of its root, and then derived an upper bound on the gap between the root and that of the standard secular equation (4). This upper bound is in turn used to bound the distance from the approximate CRS solution based on ASEs to the true CRS solution, thus offering a theoretical guarantee for the proposed CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning.
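For concreteness, the exact secular-equation approach reviewed above (full eigendecomposition, root-finding for (4), then a linear solve) might look like the following minimal Python sketch. This is our own illustration of the baseline that the ASEs approximate, not the authors' code; the tolerance and bracketing strategy are arbitrary choices.

```python
# Minimal sketch (ours, not the authors' code) of the exact secular-equation
# approach of Section 1: full eigendecomposition, bisection for the root of
# w(sigma) in (4), then recovery of x* from (A + sigma I) x = -b.
import numpy as np

def solve_crs_exact(A, b, rho, tol=1e-10):
    lam, V = np.linalg.eigh(A)            # all eigenvalues/eigenvectors: the O(n^3) bottleneck
    c = V.T @ b                           # c = V^T b
    def w(sigma):                         # secular equation (4)
        return np.sum(c**2 / (lam + sigma)**2) - sigma**2 / rho**2
    lo = max(-lam[0], 0.0) + 1e-12        # the root lies in (max{-lambda_1, 0}, +inf)
    hi = max(2.0 * lo, 1.0)
    while w(hi) > 0:                      # w is decreasing there, so expand until the sign changes
        hi *= 2.0
    while hi - lo > tol:                  # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if w(mid) > 0 else (lo, mid)
    sigma = 0.5 * (lo + hi)               # sigma = rho * ||x*||
    x = V @ (-c / (lam + sigma))          # solves (A + sigma I) x = -b
    return x, sigma
```

The np.linalg.eigh call is exactly the O(n³) step that motivates the approximate secular equations developed next.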
On the empirical side, we conducted experiments with both synthetic and real problem instances to investigate the practical performance of the proposed CRS solver and the associated CR. Experimental results showed that the proposed solver outperforms two state-of-the-art methods. The selection of m for the proposed ASEM is an interesting and crucial topic. We will discuss related issues in Section 4, and some numerical explorations are also presented in Section 5. 2 The First-Order Truncated Secular Equation We define the first-order truncated secular equation by
\[
w_1(\sigma; \mu) = \sum_{i=1}^{m} \frac{c_i^2}{(\lambda_i + \sigma)^2} + \sum_{i=m+1}^{n} \frac{c_i^2}{(\mu + \sigma)^2} - \frac{\sigma^2}{\rho^2}, \tag{5}
\]
where μ ≥ λ_m is an input parameter that approximates the unobserved eigenvalues λ_{m+1}, · · · , λ_n, c_i = b^T v_i and Σ_{i=m+1}^n c_i² = ‖b‖² − Σ_{i=1}^m c_i². Note that only m eigenvalues λ_1, · · · , λ_m and their corresponding eigenvectors v_1, · · · , v_m are needed to form (5), which is computationally friendlier compared with (4). The name first-order truncated secular equation comes from the fact that w_1(σ; μ) is the first-order Taylor approximation to the function w(σ). Below we will first study the existence and uniqueness of the root of (5). Then, we derive an error bound for the root. 2.1 Existence and Uniqueness for the Root In the easy case that b^T v_1 ≠ 0 (equivalently, c_1 ≠ 0), the solution x* to the CRS (1) is unique, which implies the existence and uniqueness of the root σ* of (4). To show that our proposed CRS solver is also well-defined, we prove the existence and uniqueness of the root of the first-order truncated secular equation (5). Lemma 1. For any μ ≥ λ_m, the function w_1(·; μ) as defined in (5) admits a unique root. Proof. Existence. We first consider the case when λ_1 ≤ 0. Then, for any fixed μ ≥ λ_m, lim_{σ→(−λ_1)^+} w_1(σ; μ) = +∞ and lim_{σ→+∞} w_1(σ; μ) = −∞. By the intermediate value theorem, w_1(·; μ) has a root in (−λ_1, +∞). For λ_1 > 0, we have w_1(0; μ) > 0 and lim_{σ→+∞} w_1(σ; μ) = −∞. Therefore, w_1(σ; μ) has a root in (0, +∞). Uniqueness. Note that w_1(σ; μ) is monotonically decreasing for σ ∈ (−λ_1, +∞) and σ ∈ (0, +∞) when λ_1 ≤ 0 and λ_1 > 0, respectively. Therefore, the uniqueness of the root of w_1(σ; μ) is guaranteed. 2.2 Error Analysis In order to study the quality of the CRS solution based on our proposed solver using approximate secular equations, we need to study the quality of the root of the first-order truncated secular equation, denoted by σ*_1. Towards that end, we provide an upper bound on the gap |σ*_1 − σ*| between σ*_1 and the root σ* of the exact secular equation (4). Theorem 1. Let σ*_1 and σ* be the unique roots of w_1(σ; μ) and w(σ), respectively. Then
\[
|\sigma_1^* - \sigma^*| \le C_m \cdot \max_{m+1 \le i \le n} |\lambda_i - \mu|, \tag{6}
\]
where C_m > 0 is a constant, upper bounded by \(\frac{2\|b\|^2}{(\lambda_m - \lambda_1)^3} \cdot \min\!\left\{ \frac{(\lambda_n + B_1)^3}{2\|b\|^2}, \frac{\rho^2}{2 B_1} \right\}\) with \(B_1 = \frac{-\lambda_1 + \sqrt{\lambda_1^2 + 4\rho\|b\|}}{2}\) being an upper bound for |σ*_1|. We clearly see that the right-hand side of inequality (6) is decreasing in m. This confirms that using more eigen information (i.e., larger m) helps to reduce the error |σ*_1 − σ*|. The proof of Theorem 1 is technical and quite long and hence relegated to Appendix A. The approximation quality of our CRS solver is guaranteed by combining Theorem 1 with the following proposition. Proposition 3. Let x* and x̃ be solutions to the equations (A + σ*I)x* = −b and (A + σ*_1 I)x̃ = −b, respectively. Then, ‖x̃ − x*‖ = O(|σ*_1 − σ*|). Proof. By definition, we have
\[
x^* = \sum_{i=1}^{n} (\lambda_i + \sigma^*)^{-1} v_i v_i^{\mathsf T} \cdot (-b) = -\sum_{i=1}^{n} (\lambda_i + \sigma^*)^{-1} c_i \cdot v_i,
\quad\text{and}\quad
\tilde{x} = \sum_{i=1}^{n} (\lambda_i + \sigma_1^*)^{-1} v_i v_i^{\mathsf T} \cdot (-b) = -\sum_{i=1}^{n} (\lambda_i + \sigma_1^*)^{-1} c_i \cdot v_i,
\]
then
\[
\|\tilde{x} - x^*\| = \Big\| \sum_{i=1}^{n} \big( (\lambda_i + \sigma_1^*)^{-1} - (\lambda_i + \sigma^*)^{-1} \big) v_i v_i^{\mathsf T} \cdot (-b) \Big\|
\le \Big( \max_{1 \le i \le n} \big| (\lambda_i + \sigma_1^*)^{-1} - (\lambda_i + \sigma^*)^{-1} \big| \Big) \cdot \|b\|
= O(|\sigma_1^* - \sigma^*|).
\]
This completes the proof. A small numerical illustration of Theorem 1 is sketched below; before ending this section, some remarks are in order.
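As a quick, hedged illustration of the behaviour predicted by Theorem 1 (ours, not an experiment from the paper), one can compare the roots of (4) and (5) on a synthetic spectrum and observe the gap shrink as m grows; `find_root` below is an assumed helper standing for any bracketing root-finder on (max{−λ_1, 0}, +∞), e.g. the bisection loop sketched above.

```python
# Quick numerical check (ours) of the trend in Theorem 1: the gap between the
# root sigma*_1 of the truncated equation (5) and the root sigma* of (4)
# shrinks as m grows.  `find_root(f, lower)` is an assumed bracketing root-finder.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 500, 0.1
lam = np.sort(rng.uniform(-1.0, 1.0, n))        # synthetic spectrum of A
c = 0.01 * rng.normal(size=n)                   # c = V^T b (easy case: c[0] != 0)

def w(sigma):                                   # exact secular equation (4)
    return np.sum(c**2 / (lam + sigma)**2) - sigma**2 / rho**2

sigma_star = find_root(w, lower=max(-lam[0], 0.0))
for m in (5, 50, 200):
    mu = lam[m:].mean()                         # choice (7): mean of the unknown eigenvalues
    def w1(sigma, m=m, mu=mu):                  # first-order truncated secular equation (5)
        head = np.sum(c[:m]**2 / (lam[:m] + sigma)**2)
        tail = np.sum(c[m:]**2) / (mu + sigma)**2
        return head + tail - sigma**2 / rho**2
    sigma_1 = find_root(w1, lower=max(-lam[0], 0.0))
    print(m, abs(sigma_1 - sigma_star))
```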
First, the parameter µ acts as an approximation to n m unknown eigenvalues m+1, · · · , n. An intuitive choice of µ that works well in practice and is computationally cheap is the average of unknown eigenvalues, i.e., µ1 = Pn i=m+1 i n m = tr(A) Pm i=1 i n m . (7) Second, the error bound Cm ·maxm+1in | i µ| in Theorem 1 depends on the distribution of eigenvalues of A. If the unobserved eigenvalues m+1, · · · , n cluster around a small interval, then with a suitable choice of µ 2 [ m+1, n], maxm+1in | i µ| is small. Conversely, if the unknown eigenvalues spread over a large interval, then it is hard to make the error maxm+1in | i µ| small. Third, it is instructive to study the error bound (6) under some random matrix model for A. Suppose that A = eA/ p 2n, where eA is a symmetric random matrix with i.i.d. entries on and above the diagonal. By the Wigner semicircle law [6], as n ! 1, the eigenvalues of A distribute according to a density of a semi-circle shape. In particular, we can deduce that with a probability of 1 o(1), max m+1in | i µ| O ✓ 1 m+ 1 n ◆2/3! ⇡ ✓ 3⇡ 4 p 2 ◆2/3 · ✓ 1 m+ 1 n ◆2/3 (8) The detailed proof of (8) and further discussions under random A can be found in Appendix C. 3 The Second-Order Truncated Secular Equation Similarly to the equation (5), but with the second-order Taylor approximation, we define the secondorder truncated secular equation by w2( ;µ) = mX i=1 c2i ( i + )2 + nX i=m+1 c2i (µ+ )2 2 nX i=m+1 c2i · ( i µ) (µ+ )3 2 ⇢2 , (9) where µ m is an input parameter that approximates the unobserved eigenvalues m+1, · · · , n. 3.1 Existence and Uniqueness for the Root The lemma blew shows the existence and uniqueness of the root of w2(·;µ). Lemma 2. With µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i , (10) the function w2(·;µ) as defined in (9) admits a unique root. Proof. When µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i , the third summation in the definition (9) vanishes, and hence w2( , µ) becomes the same as w1( , µ), except with a specific choice of µ. The desired conclusion then follows from Lemma 1. Unlike its first-order counterpart, we do not develop the existence and uniqueness of the root of the second-order truncated secular equation for arbitrary µ. The reason is that when 1 > 0, w2(0;µ) can potentially be positive or negative. 3.2 Error Analysis Similar to that for the first-order truncated secular equation, we can also derive an error bound for the root of the second-order truncated secular equation. Theorem 2. Let ⇤2 and ⇤ be the unique root of w2( ;µ) and w( ), respectively, and µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i . Then, | ⇤2 ⇤ | Cm · max m+1in ( i µ) 2, (11) where Cm > 0 is a constant bounded by 3kbk2 ( m 1)4 · min n ( n+B1) 3 2kbk2 , ⇢2 2B1 o with B1 = 1+ p 21+4⇢·kbk 2 being an upper bound for | ⇤ 2 |. The proof of Theorem 2 can be found in Appendix B. We can similarly estimate the approximation quality by combining Theorem 2 and Proposition 3. Again, the right-hand side of the error bound (11) is decreasing in m. We should also point out that the CRS solver based on the second-order secular equation outperforms the first-order counterpart only if maxm+1in | i µ|/| m 1| is small enough. The computation of µ here requires cm+1, · · · , cn, which seem to be inaccessible. We provide a tractable form for µ in (13) and will discuss it in the next part. 4 Implementation Details We now discuss the implementation details for solving the proposed first-order secular equation for CRS. 
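Before the step-by-step details, the following end-to-end sketch shows how the three steps fit together. It is our illustration, not the authors' implementation: it uses SciPy's eigsh for the partial eigenpairs and conjugate gradients for the final solve (the paper adopts Lanczos-type routines for Steps 1 and 3), and it assumes tr(A) is available for the choice of μ in (7).

```python
# End-to-end sketch (ours) of the ASEM pipeline detailed in the three steps below:
# Step 1 partial eigenpairs, Step 2 root of the first-order truncated secular
# equation (5) with mu from (7), Step 3 iterative solve of (A + sigma I) x = -b.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh, cg

def solve_crs_asem(matvec, trace_A, b, rho, n, m, tol=1e-10):
    A_op = LinearOperator((n, n), matvec=matvec)
    lam, V = eigsh(A_op, k=m, which='SA')        # Step 1: m algebraically smallest eigenpairs
    c = V.T @ b
    tail_mass = b @ b - c @ c                    # sum_{i>m} c_i^2 = ||b||^2 - sum_{i<=m} c_i^2
    mu = (trace_A - lam.sum()) / (n - m)         # choice (7): average of the unknown eigenvalues
    def w1(sigma):                               # first-order truncated secular equation (5)
        return (np.sum(c**2 / (lam + sigma)**2)
                + tail_mass / (mu + sigma)**2 - sigma**2 / rho**2)
    lo = max(-lam[0], 0.0) + 1e-12               # Step 2: bisection on (max{-lambda_1, 0}, +inf)
    hi = max(2.0 * lo, 1.0)
    while w1(hi) > 0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if w1(mid) > 0 else (lo, mid)
    sigma = 0.5 * (lo + hi)
    shifted = LinearOperator((n, n), matvec=lambda v: matvec(v) + sigma * v)
    x, _ = cg(shifted, -b)                       # Step 3: only Hessian-vector products needed
    return x, sigma
```

Only matvec (Hessian-vector products) and tr(A) are touched, so the overall cost stays at O(mn²) in the dense case, as claimed below.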
First, we obtain the partial eigen information { 1, · · · , m} and {v1, · · · ,vm} by Krylov subspace methods. Note that only Hessian-vector products are required. This is computationally friendlier than other methods that rely on matrix inversions and is particularly suitable for modern, high-dimensional applications. Then, we solve the first-order secular equation (5) with µ defined in (7) or (10), using any root-finding algorithm, such as Newton’s method. Finally, we solve the linear system (A+ ⇤I)x = b by iterative algorithms, e.g., the Lanczos method and the conjugate gradient method. The resulting CRS solver, namely the approximate secular equation method (ASEM), is summarized as follows: Step 1: obtaining the partial eigen information { 1, · · · , m} and {v1, · · · ,vm} of A. Step 2: solving the secular equation (5) with µ defined in (7) or (10); we get ⇤. Step 3: iteratively solving the linear system (A+ ⇤I)x+ b = 0. Output: the solution x. Details for Step 1. The Krylov subspace is one of the most popular iterative methods in solving eigen problems with O(mn2) computational cost [13]. The Lanczos decomposition for a real symmetric matrix B satisfies BUk = TkUk + kuk+1e T k , where Uk 2 Rn⇥k is an orthonormal matrix (i.e., UTkUk = Ik), Tk 2 Rk⇥k is a symmetric tridiagonal matrix and ek 2 Rk is the k-th standard basis vector in Rk. Lanczos observed that even for comparatively small k, Tk approximates B very well in terms of eigenvalues and eigenvectors. Specifically, for a suitable eigenpair ( ,w) of Tk with Tkw = · w, the pair ( ,Ukw) is an approximate eigenpair of B, i.e., Bz ⇡ ·z with z = Ukw. Here, the Krylov subspace is constructed by u1, the first column of Uk = [u1, · · · ,uk], i.e., Kk(B,u1). Note that Tk approximates B for eigenvalues with largest modulus (or absolute values) and the corresponding eigenvectors. Empirically, for calculating m eigenvalues of B with largest absolute values and the corresponding eigenvectors, we usually construct the Krylov subspace Kk(B,u1) with dimension k = max{2m, 20}. The base vector u1 is also essential for the Krylov subspace method. Moreover, restarting is adopted to iteratively update the base vector u1. To the best of our knowledge, the mentioned iterative method for partial eigen information is supported in many softwares, e.g., Matlab (eigs function) and Python (Scipy package) etc. For more details, please refer to ARPACK [11]. Returning back to the proposed algorithm with A, instead of calculating the largest (in terms of the absolute value) m eigenvalues of A, we aim to get m (algebraically) smallest eigenvalues { 1, · · · , m} and the corresponding eigenvectors {v1, · · · ,vm}. We first roughly calculate a shift value kAk by several steps of power iteration (Hessian-vector products). Then, let B = ·I A, whose eigenvalues { n, · · · , 1} are non-negative and the corresponding eigenvectors are {vn, · · · ,v1}. Applying the mentioned Krylov subspace algorithm, we first obtain an estimate of shifted eigenvalues { 1, · · · , m} and the corresponding eigenvectors {v1, · · · ,vm} for A, since { m, · · · , 1} are largest eigenvalues of B. To further lower the computational cost, we may adopt k-dimensional Krylov subspace Kk(B,u1) for m eigenvalues without restarting in implementation with k = m. Details for Step 2. Instead of directly solving (5), Cartis et al. [3] recommended to find the root for the equivalent equation: w̃1( ;µ) = vuut mX i=1 c2i ( i + )2 + nX i=m+1 c2i (µ+ )2 ⇢ , (12) which is convex on ( 1,+1). 
Moreover, under perfect initialization, Newton’s method is proved to achieve (locally) quadratic convergence. However, we numerically find that it depends much on the initialization and may converge to a point outside the feasible domain ( 1,+1) if it has imperfect initialization. Here, we recommend to use the bisection method to find the root of (5) or (12) due to its linear convergence, stability and ease of implementation. For the weighted average µ defined in (10), we can rewrite it as a more tractable but equivalent form: µ2 = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i = bT(A Vm⇤mVTm)b kbk2 Pm i=1 c 2 i = bTAb Pm i=1 c 2 i · i kbk2 Pm i=1 c 2 i , (13) where Vm = [v1, · · · ,vm] and ⇤m = diag( 1, · · · , m). Details for Step 3. There are many well-studied, efficient and reliable iterative methods for (real symmetric) linear systems, e.g., Krylov subspace (Lanczos) methods and conjugate gradient methods etc. We adopt the Lanczos method for solving the linear system (A+ ⇤I)x+ b = 0, where only a few steps of Hessian-vector products are required. In summary, the main computational cost comes from Step 1 and Step 3 for Hessian-vector products (O(mn2)), since solving the root of w1( ;µ) is a 1-dimensional problem in Step 2 and is of cost O(n). Therefore, the total computational cost for the proposed algorithm is O(mn2), much lower than the method based on full eigendecomposition (O(n3)). The selection of m. The choice of the parameter m is important to our CRS solver: a larger m yields a better CRS solution quality but incurs a higher computational cost. If A is a Gaussian random matrix, by (8), we can achieve "-accuracy (i.e., | ⇤1 ⇤| ") if m n satisfies ✓ 3⇡ 4 p 2 ◆2/3 · ✓ 1 m+ 1 n ◆2/3 ". However, the error bound (8) provides only a conservative sufficient conditions for m. Moreover, for general problems without the Gaussian assumption on A, it is hard to choose m based on the "-accuracy, because the error bounds (6) and (11) are implicit in m. Therefore, adaptive methods (or heuristic methods) for selecting m are necessary in practice. A natural way is to check the suboptimality (gradient norm) in each step, and enlarge m by m0 (i.e., we set m = max{m + m0,mmax}, where mmax is the maximal number of eigenvalues we adopt in ASEM), if the output does not satisfy the given condition for suboptimality. Moreover, numerical experiments on CUTEst problems (see Experiment 6 in Section 5) shows that m = 1 is enough for most of the cases. We left the study for the selection of m as future work. To the best of our knowledge, the Krylov subspace method [3, 2] for CRS suffers from a similar issue of hyperparameter selection. 5 Experimental Results Without the loss of generality (see Appendix D), we assume that A is diagonal in the synthetic CRS instances, for simplicity and fair comparison. Furthermore, we also test the proposed ASEM on CUTEst library [7]. All experiments were run on a Macbook Pro M1 laptop. For more experimental details, please refer to Appendix E. Experiment 1. The distribution for eigenvalues of the matrix A. In the error analysis (Theorem 1 and Theorem 2), the error is controlled by the approximation of µ to unobserved eigenvalues { i}ni=m+1, i.e., maxm+1in | i µ| and maxm+1in( i µ)2. It further implies that the distribution of eigenvalues { i}ni=m+1 is essential for the proposed method. Intuitively, if eigenvalues { i}ni=m+1 cluster around a small interval, then the rough estimate (7) for µ is enough to approximate the unknown eigenvalues well. 
Conversely, if eigenvalues { i}ni=m+1 spread across a large interval, then we cannot expect a single µ to estimate all eigenvalues { i}ni=m+1. Here, we have four specially designed cases for distributions of eigenvalues of the matrix A to illustrate our theoretical observations for the proposed method. Case 1 (evenly spaced): all eigenvalues { i}ni=1 are evenly spaced in [ 1, 1]; Case 2 (separated): half of eigenvalues are far away from the remaining, i.e., eigenvalues { i} n/2 i=1 are evenly spaced in [ 1, 4/5] and the remaining eigenvalues { i} n i=n/2+1 are evenly spaced in [4/5, 1]; Case 3 (right centered): the minimal 2% of eigenvalues and the remaining 98% of eigenvalues gather together respectively, i.e., eigenvalues { i} n/50 i=1 and { i} n i=n/50+1 are spaced evenly in [ 1, 4/5] and [4/5, 1] respectively; Case 4 (left centered): the maximal 2% of eigenvalues and the remaining 98% of eigenvalues gather together respectively, i.e., eigenvalues { i} 49/50n i=1 and { i}ni=49/50n+1 are evenly spaced in [ 1, 4/5] and [4/5, 1] respectively. The vector b is proportional to [1, · · · , 1]T with kbk = 0.1. The remaining parameters are n = 5⇥ 103 and ⇢ = 0.1. Here we adopt the first-order ASEM (i.e., µ is defined in (7)). Figure 1 validates our theories that the proposed algorithm converges fast if unknown eigenvalues { i}ni=m+1 are close. Moreover, without the need to compute all eigenvalues, partial eigen information is enough to achieve satisfactory solutions in practice, except for the hard case (e.g., Case 4). Experiment 2. The effect of the parameter µ (first-order and second-order ASEMs). For the first- and second-order ASEMs, we define µ according to (7) and (10) respectively. Here, we test the effect of µ for the proposed method in solving the cubic regularized quadratic problem, with other parameters fixed. Case 1 (first-order ASEM): µ is adopted as the mean value of unknown eigenvalues, defined in (7); Case 2 (second-order ASEM): µ is selected as the weighted average of unknown eigenvalues with weights c2i (see (10)); Case 3 (first-order ASEM): µ = m, the maximal eigenvalue we observe; Case 4: µ = 106, much larger than the eigenvalues of A, as an approximation to +1. The vector b is proportional to [ 1, · · · , n]T with length kbk = 0.1. Eigenvalues of the matrix A are evenly spaced in [ 1, 1]. The remaining parameters are d = 5 ⇥ 103 and ⇢ = 0.1. Figure 2 shows the superiority of the second-order ASEM over the first-order ASEM that it is more stable and converges faster when m is large, consistent with Theorem 1 and Theorem 2. Moreover, the results further imply the importance of the choice of µ. There are several observations from Figure 2. Firstly, we cannot discard the residual term with the unknown eigenvalues { i}ni=m+1, where they still contain much information, as is shown in Case 1 and Case 4. Secondly, the random selection of µ does not work well and may even cause divergence (e.g., see Case 3 and Case 4). Furthermore, a suitable choice of µ leads to a well-behaved algorithm (e.g., see Case 1 and Case 2). Experiment 3. Approximation capabilities of the Krylov subspace method for ASEM. As is introduced in Section 4, we adopt m-dimensional Krylov subspace Km(B,u1) to approximately calculate m algebraically smallest eigenvalues { 1, · · · , m} and the corresponding eigenvectors {v1, · · · ,vm}. We now investigate the performance of ASEM with estimated eigenvalues and eigenvectors. The vector b is proportional to the vector [1, · · · , 1]T with length kbk = 0.1. 
Eigenvalues of the matrix A are evenly spaced in [ 1, 1]. The remaining parameters are n = 5⇥ 103 and ⇢ = 0.1. We adopt the first-order ASEM (i.e., µ is defined in (7)) here. Trajectories for suboptimality with exact and approximated eigenvalues and eigenvectors are shown in Figure 3. The Krylov subspace Km(B,u1) with relatively low dimension m for ASEM matches well with ASEM with exact eigenvalues. This experiment justifies the use of the m-dimensional Krylov subspace for m eigenvalues and eigenvectors in ASEM. Experiment 4. Comparison of ASEM with the Krylov subspace method [3, 2] and the gradient descent method [1] on synthetic problems. For large-scale problems, the Krylov subspace method and the gradient descent method are two state-of-the-art methods for CRS (1). In this experiment, we compare the proposed ASEM against the Krylov subspace method and the gradient descent method. The dominant computation steps for these three methods are Hessian-vector products (O(mn2)). The vector b is proportional to the vector [1, · · · , 1]T with length kbk = 0.1. Eigenvalues of the matrix A are evenly spaced in [ 1, 1]. Similar to the setting in [2], we define the condition number for (1) as = n+⇢·kx ⇤k 1+⇢·kx⇤k = n+ ⇤ 1+ ⇤ . Then, we have ⇤ = n · 1 1 and ⇢ = ⇤ k(A+ ⇤I) 1bk . Case 1: easy case that = 103; Case 2: harder case that = 106. The remaining parameter is n = 5⇥ 103. We adopt the first-order ASEM (i.e., µ is defined in (7)). As shown in Figure 4, ASEM outperforms both the gradient descent method and the Krylov subspace method when m is relatively large. It is reasonable that ASEM underperforms when m is small since the m-dimensional Krylov subspace cannot well approximate eigenvalues and eigenvectors of A. The results further demonstrate the performance of the proposed ASEM method. Experiment 5. Comparison of ASEM with the Cauchy point method [5], the gradient descent method and the Krylov subspace method on the CUTEst problems [7]. The CUTEst library collects various unconstrained and constrained optimization problems that arise in real applications. In this part, we compare the numerical performances of the ARC algorithm [3] on four unconstrained optimization problems from the CUTEst library, where subproblems are solved by the Cauchy point method (ARCCP), the gradient descent method (ARC-GD), the Krylov subspace method (ARC-Krylov(k), where k is the number of Lanczos basis vectors) and the ASEM method (ARC-ASEM(m), where m is the number of eigenvalues for ASEM). The architecture of the ARC algorithm is provided in Appendix E.2. Four unconstrained optimization problems (e.g., TOINTGSS, BRYBAND, DIXMAANG, and TQUARTIC) in the CUTEst library are adopted for testing, where the dimensions are 1000, 2000, 3000, and 5000, respectively. We use the first-order ASEM (µ is defined in (7)) here since we found that the performances of the first-order ASEM and the second-order ASEM do not differ much for these problems. Numerical results are reported in Table 1, where xout, krf(xout)k, 1(r2f(xout)), iter and time represent the output of the ARC algorithm, the suboptimality (gradient norm), the minimal eigenvalue of the Hessian matrix, number of iterations for the ARC and CPU time, respectively. Here are several observations. Firstly, the proposed ASEM algorithm outperforms others in most cases and is comparable to the Krylov subspace method sometimes, where the ASEM achieves a worse suboptimality (gradient norms) or CPU time. 
Furthermore, only one eigenvalue is enough for the ASEM to perform well (i.e., ARC-ASEM(m) with m = 1), which is surprising. For more experimental details, please refer to Appendix E. 6 Conclusion We develop the first-order and the second-order truncated secular equations as surrogates to the secular equation with full eigendecomposition in solving the CRS (1). The proposed ASEM is an efficient alternative to existing methods for solving CRS since it reduces the computational cost from O(n3) to O(mn2). Our CRS solvers feature rigorous theoretical error bound, which is related to the amount of eigen information used. We also discuss in detail the implementation of our proposed algorithm ASEM. In particular, we show how only Hessian-vector products are needed, but not matrix inversion. Numerical experiments are conducted to further investigate the properties and performance of the proposed ASEM and corroborate with the theoretical results. From our experiments, we find that the proposed ASEM is more efficient than the state-of-the-art methods on synthetic and CUTEst problems. Acknowledgement We would like to thank the anonymous reviewers and chairs for their helpful comments. Michael K. Ng is supported by Hong Kong Research Grant Council GRF 12300218, 12300519, 17201020, 17300021, C1013-21GF, C7004-21GF and Joint NSFC-RGC N-HKU76921. Man-Chung Yue is supported by the Research Grants Council (RGC) of Hong Kong under the GRF project 15305321.
1. What is the focus and contribution of the paper regarding the cubic regularization method?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of computational time reduction and theoretical analysis?
3. Do you have any questions or concerns regarding the secular equation solution, its approximation, and the theoretical bounds provided by the authors?
4. How does the reviewer assess the clarity and quality of the paper's content, including the proofs and mathematical rigor?
5. Are there any limitations or potential improvements in the paper's approach, especially regarding the theoretical bound and error analysis?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
In the cubic regularization method, a key step is to solve the so-called secular equation, which may cost O(n^3) time. In this paper, the authors consider two faster approximations to the secular equation which reduce the computing time to O(n^2 m). The authors give some theoretical analysis of the proposed methods.
Strengths And Weaknesses
The proposed method reduces the computing time of the secular equation, which may be novel. The paper is relatively well-written and clear. This paper is mostly a theoretical paper. In my opinion, however, the theoretical results are not significant enough. In fact, from the authors' theoretical results (Proposition 3 for example), the proposed method has an irreducible error. From the authors' experiments, it may be the case that the obtained theoretical bound is too loose.
Questions
line 41: Proposition 2. Is this proposition adapted from Claim 2.1 in [1]? Why A + rho ||x^*|| > 0? Can you provide a proof of Proposition 2?
line 45: "and hence the gradient norm serves as an optimality measure" — I do not understand this.
line 49: "lambda_1 + sigma > 0" — Should it be >= 0?
line 90: "for any mu". Should it be "for any mu > lambda_m"?
line 109: What does the symbol big O mean in your context? From line 113, it seems that ||tilde x - x*|| has no absolute upper bound; for example, suppose |lambda_i| and sigma* are very small.
line 122 -- 126: These discussions lack mathematical rigor.
From the theoretical view, is it possible to improve the bound?
Limitations
Yes
NIPS
Title Approximate Secular Equations for the Cubic Regularization Subproblem Abstract The cubic regularization method (CR) is a popular algorithm for unconstrained non-convex optimization. At each iteration, CR solves a cubically regularized quadratic problem, called the cubic regularization subproblem (CRS). One way to solve the CRS relies on solving the secular equation, whose computational bottleneck lies in the computation of all eigenvalues of the Hessian matrix. In this paper, we propose and analyze a novel CRS solver based on an approximate secular equation, which requires only some of the Hessian eigenvalues and is therefore much more efficient. Two approximate secular equations (ASEs) are developed. For both ASEs, we first study the existence and uniqueness of their roots and then establish an upper bound on the gap between the root and that of the standard secular equation. Such an upper bound can in turn be used to bound the distance from the approximate CRS solution based ASEs to the true CRS solution, thus offering a theoretical guarantee for our CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning. Numerical experiments with synthetic and real data-sets are conducted to investigate the practical performance of the proposed CRS solver. Experimental results show that the proposed solver outperforms two state-of-the-art methods. 1 Introduction The cubic regularization method (CR) is a variant of Newton’s method proposed by Griewank [8], and later independently by Nesterov and Polyak [12], and Weiser et al. [16]. It gained significant attention over the last decade due to its attractive theoretical properties, such as convergence to second-order critical points[12] and quadratic convergence rate under mild assumptions [17]. Each iteration of CR solves a problem of the following form, called the cubic regularization subproblem (CRS): min x2Rn fA,b,⇢(x) := b Tx+ 1 2 xTAx+ ⇢ 3 kxk3, (1) where ⇢ > 0 is the regularization parameter, b 2 Rn, and A 2 Rn⇥n is a symmetric matrix, not necessarily positive semidefinite. Many variants and generalizations of CR are developed, including 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the Adaptive Regularization Using Cubics (ARC) which allows for a dynamic choice of ⇢ and inexact CRS solutions [3, 4], accelerated CR using momentum [15] and stochastic CR for solving stochastic optimization [14]. Despite the theoretical success, the practicality of CR and its variants relies critically on the CRS solver, a topic that attracts considerable research recently [2, 1, 10, 9]. The goal of this paper is to develop a novel, efficient CRS solver along with theoretical guarantees. A popular approach for solving the CRS is via solving the so-called secular equation. We now review this approach. Towards that, we denote by 1 · · · n the eigenvalues of A and by v1, · · · ,vn the corresponding eigenvectors. In other words, we have the eigendecomposition A = Pn i=1 iviv T i = V⇤V T, where ⇤ = diag( 1, . . . , n) and V = [v1, · · · ,vn]. Note that eigenvalues i are not necessarily positive due to the indefiniteness of the matrix A. Also, we denote the Euclidean norm by k · k. Proposition 1 ([12, 3]). A vector x⇤ solves the CRS (1) if and only if it satisfies the system ⇢ (A+ ⇢kx⇤kI)x⇤ + b = 0, A+ ⇢kx⇤kI ⌫ 0. 
(2) (3) Moreover, if A+ ⇢kx⇤kI 0, then x⇤ is the unique solution (and hence a critical point). Proposition 2 ([1]). Let x⇤ be a global solution of CRS (1) and the eigendecomposition for A = n i=1 iviv T i = V⇤V T , where ⇤ = diag( 1, . . . , n) and V = [v1, · · · ,vn]. If bTv1 6= 0, then A + ⇢kx⇤kI 0 and the solution x⇤ is the unique critical point (and hence the unique solution). Conversely, if bTv1 = 0, then the CRS (1) has multiple optimal solutions. From Proposition 2, if bTv1 6= 0, then there is only one critical point, which is also the optimal solution, and hence the gradient norm krfA,b,⇢(x)k serves as an optimality measure. Throughout the paper, we assume bTv1 6= 0, under which the CRS is said to be in the easy case. This is without much loss of generality as this holds generically true in practice. Moreover, we could easily avoid the hard case (bTv1 = 0) by slightly perturbing the vector b, see [12, 2]. To introduce the secular equation, note that in the easy case, conditions (2) and (3) can be written as ⇢ (⇤+ I) · y⇤ = c, 1 + > 0. where =: ⇢kx⇤k, [y⇤1 , · · · , y⇤n]T := y⇤ = VTx⇤ and [c1, · · · , cn]T := c = VTb. Therefore, y⇤i = ci i + , i = 1, . . . , n. Since the Euclidean norm is invariant to orthogonal transformation, we have 2 ⇢2 = kx⇤k2 = ky⇤k2 = nX i=1 c2i ( i + )2 . Consequently, instead of solving the complicated nonlinear system (2)-(3), we could solve the CRS (1) by first finding the (unique) root > max{ 1, 0} of the equation w( ) = nX i=1 c2i ( i + )2 2 ⇢2 , (4) called the secular equation, and then solves the linear system (A+ I)x = b. The first step can be done efficiently by using existing root-finding algorithms (e.g., the bisection method and Newton’s method etc.). The disadvantage of the above CRS solver, based on the secular equation (4), is that it requires the full spectrum of A, which costs O(n3). This approach is viable only for low- to moderate-dimensional problems. However, when n is large, computing all eigenvalues of A is prohibitive. Worse still, after the root is solved, we still need to apply iterative methods (e.g., Lanczos method) to solve the large-scale linear system (2). We are thus motivated to approximate the secular equation by using only some of the eigenvalues of A, as opposed to all. As our main contribution, we developed two different approximate secular equations (ASEs), both of which require computing m < n eigenvalues of A. The cost for forming the approximate secular equations is only O(mn2), and hence the resulting CRS solver is much more efficient and scalable. On the theoretical side, for each of the proposed approximate secular equations, we first studied the existence and uniqueness of its root, and then derived an upper bound on the gap between the root and that of the standard secular equation (4). This upper bound is in turn used to bound the distance from the approximate CRS solution based ASEs to the true CRS solution, thus offering a theoretical guarantee for the proposed CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning. On the empirical side, we conducted experiments with both synthetic and real problem instances to investigate the practical performance of the proposed CRS solver and the associated CR. 
Experimental results showed that the proposed solver outperforms two state-of-the-art methods. The selection of m for the proposed ASEM is an interesting and crucial topic. We will discuss related issues in Section 4 and some numerical explorations are also presented in Section 5. 2 The First-Order Truncated Secular Equation We define the first-order truncated secular equation by w1( ;µ) = mX i=1 c2i ( i + )2 + nX i=m+1 c2i (µ+ )2 2 ⇢2 , (5) where µ m is an input parameter that approximates the unobserved eigenvalues m+1, · · · , n, ci = bTvi and Pn i=m+1 c 2 i = kbk 2 Pm i=1 c 2 i . Note that only m eigenvalues 1, · · · , m and their corresponding eigenvectors v1, · · · ,vm are needed to form (5), which is computationally friendlier compared with (4). The name first-order truncated secular equation comes from the fact that w1( , µ) is the first-order Taylor approximation to the function w( ). Below we will first study the existence and uniqueness for the root of (5). Then, we derive an error bound for the root. 2.1 Existence and Uniqueness for the Root In the easy case that bTv1 6= 0 (equivalently, c1 6= 0), the solution x⇤ to the CRS (1) is unique, which implies the existence and uniqueness for the root ⇤ of (4). To show that our proposed CRS solver is also well-defined, we prove the existence and uniqueness of the root of the first-order truncated secular equation (5). Lemma 1. For any µ m, the function w1(·;µ) as defined in (5) admits a unique root. Proof. Existence. We first consider the case when 1 0. Then, for any fixed µ m, lim !( 1)+ w1( ;µ) = +1 and lim !+1 w1( ;µ) = 1, By the intermediate value theorem, w1(·;µ) has a root in ( 1,+1). For 1 > 0, we have w1(0;µ) > 0 and lim !+1 w1( ;µ) = 1. Therefore, w1( ;µ) has a root in (0,+1). Uniqueness. Note that w1( ;µ) is monotonically decreasing for 2 ( 1,+1) and 2 (0,+1) when 1 0 and 1 > 0, respectively. Therefore, the uniqueness of the root for w1( ;µ) is guaranteed. 2.2 Error Analysis In order to study the quality of the CRS solution based on our proposed solver using approximate secular equations, we need to study the quality of the root to the first-order truncated secular equation, denoted by ⇤1 . Towards that end, we provide an upper bound on the gap | ⇤1 ⇤| between ⇤1 and the root ⇤ of the exact secular equation (4). Theorem 1. Let ⇤1 and ⇤ be the unique roots of w1( ;µ) and w( ), respectively. Then | ⇤1 ⇤ | Cm · max m+1in | i µ|, (6) where Cm > 0 is a constant, upper bounded by 2kbk2 ( m 1)3 · min n ( d+B1) 3 2kbk2 , ⇢2 2B1 o with B1 = 1+ p 21+4⇢·kbk 2 being an upper bound for | ⇤ 1 |. We clearly see that the right-hand side of inequality (1) is decreasing in m. This confirms that using more eigen information (i.e., larger m) helps to reduce the error | ⇤1 ⇤|. The proof of Theorem 1 is technical and quite long and hence relegated to Appendix A. The approximation quality of our CRS solver is guaranteed by combining Theorem 1 with the following proposition. Proposition 3. Let x⇤ and x̃ be solutions to the equations (A+ ⇤I)x⇤ = b and (A+ ⇤1I) x̃ = b, respectively. Then, kx̃ x⇤k = O (| ⇤1 ⇤ |). Proof. By definition, we have x⇤ = nX i=1 ( i + ⇤) 1 viv T i · ( b) = nX i=1 ( i + ⇤) 1 ci · vi, and x̃ = nX i=1 ( i + ⇤ 1) 1 viv T i · ( b) = nX i=1 ( i + ⇤ 1) 1 ci · vi, then kx̃ x⇤k = nX i=1 ⇣ ( i + ⇤ 1) 1 ( i + ⇤) 1 ⌘ viv T i · ( b) ✓ max 1in n ( i + ⇤ 1) 1 ( i + ⇤) 1 o◆ · kbk = O (| ⇤1 ⇤ |) . This completes the proof. Before ending this section, some remarks are in order. 
First, the parameter µ acts as an approximation to n m unknown eigenvalues m+1, · · · , n. An intuitive choice of µ that works well in practice and is computationally cheap is the average of unknown eigenvalues, i.e., µ1 = Pn i=m+1 i n m = tr(A) Pm i=1 i n m . (7) Second, the error bound Cm ·maxm+1in | i µ| in Theorem 1 depends on the distribution of eigenvalues of A. If the unobserved eigenvalues m+1, · · · , n cluster around a small interval, then with a suitable choice of µ 2 [ m+1, n], maxm+1in | i µ| is small. Conversely, if the unknown eigenvalues spread over a large interval, then it is hard to make the error maxm+1in | i µ| small. Third, it is instructive to study the error bound (6) under some random matrix model for A. Suppose that A = eA/ p 2n, where eA is a symmetric random matrix with i.i.d. entries on and above the diagonal. By the Wigner semicircle law [6], as n ! 1, the eigenvalues of A distribute according to a density of a semi-circle shape. In particular, we can deduce that with a probability of 1 o(1), max m+1in | i µ| O ✓ 1 m+ 1 n ◆2/3! ⇡ ✓ 3⇡ 4 p 2 ◆2/3 · ✓ 1 m+ 1 n ◆2/3 (8) The detailed proof of (8) and further discussions under random A can be found in Appendix C. 3 The Second-Order Truncated Secular Equation Similarly to the equation (5), but with the second-order Taylor approximation, we define the secondorder truncated secular equation by w2( ;µ) = mX i=1 c2i ( i + )2 + nX i=m+1 c2i (µ+ )2 2 nX i=m+1 c2i · ( i µ) (µ+ )3 2 ⇢2 , (9) where µ m is an input parameter that approximates the unobserved eigenvalues m+1, · · · , n. 3.1 Existence and Uniqueness for the Root The lemma blew shows the existence and uniqueness of the root of w2(·;µ). Lemma 2. With µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i , (10) the function w2(·;µ) as defined in (9) admits a unique root. Proof. When µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i , the third summation in the definition (9) vanishes, and hence w2( , µ) becomes the same as w1( , µ), except with a specific choice of µ. The desired conclusion then follows from Lemma 1. Unlike its first-order counterpart, we do not develop the existence and uniqueness of the root of the second-order truncated secular equation for arbitrary µ. The reason is that when 1 > 0, w2(0;µ) can potentially be positive or negative. 3.2 Error Analysis Similar to that for the first-order truncated secular equation, we can also derive an error bound for the root of the second-order truncated secular equation. Theorem 2. Let ⇤2 and ⇤ be the unique root of w2( ;µ) and w( ), respectively, and µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i . Then, | ⇤2 ⇤ | Cm · max m+1in ( i µ) 2, (11) where Cm > 0 is a constant bounded by 3kbk2 ( m 1)4 · min n ( n+B1) 3 2kbk2 , ⇢2 2B1 o with B1 = 1+ p 21+4⇢·kbk 2 being an upper bound for | ⇤ 2 |. The proof of Theorem 2 can be found in Appendix B. We can similarly estimate the approximation quality by combining Theorem 2 and Proposition 3. Again, the right-hand side of the error bound (11) is decreasing in m. We should also point out that the CRS solver based on the second-order secular equation outperforms the first-order counterpart only if maxm+1in | i µ|/| m 1| is small enough. The computation of µ here requires cm+1, · · · , cn, which seem to be inaccessible. We provide a tractable form for µ in (13) and will discuss it in the next part. 4 Implementation Details We now discuss the implementation details for solving the proposed first-order secular equation for CRS. 
First, we obtain the partial eigen information { 1, · · · , m} and {v1, · · · ,vm} by Krylov subspace methods. Note that only Hessian-vector products are required. This is computationally friendlier than other methods that rely on matrix inversions and is particularly suitable for modern, high-dimensional applications. Then, we solve the first-order secular equation (5) with µ defined in (7) or (10), using any root-finding algorithm, such as Newton’s method. Finally, we solve the linear system (A+ ⇤I)x = b by iterative algorithms, e.g., the Lanczos method and the conjugate gradient method. The resulting CRS solver, namely the approximate secular equation method (ASEM), is summarized as follows: Step 1: obtaining the partial eigen information { 1, · · · , m} and {v1, · · · ,vm} of A. Step 2: solving the secular equation (5) with µ defined in (7) or (10); we get ⇤. Step 3: iteratively solving the linear system (A+ ⇤I)x+ b = 0. Output: the solution x. Details for Step 1. The Krylov subspace is one of the most popular iterative methods in solving eigen problems with O(mn2) computational cost [13]. The Lanczos decomposition for a real symmetric matrix B satisfies BUk = TkUk + kuk+1e T k , where Uk 2 Rn⇥k is an orthonormal matrix (i.e., UTkUk = Ik), Tk 2 Rk⇥k is a symmetric tridiagonal matrix and ek 2 Rk is the k-th standard basis vector in Rk. Lanczos observed that even for comparatively small k, Tk approximates B very well in terms of eigenvalues and eigenvectors. Specifically, for a suitable eigenpair ( ,w) of Tk with Tkw = · w, the pair ( ,Ukw) is an approximate eigenpair of B, i.e., Bz ⇡ ·z with z = Ukw. Here, the Krylov subspace is constructed by u1, the first column of Uk = [u1, · · · ,uk], i.e., Kk(B,u1). Note that Tk approximates B for eigenvalues with largest modulus (or absolute values) and the corresponding eigenvectors. Empirically, for calculating m eigenvalues of B with largest absolute values and the corresponding eigenvectors, we usually construct the Krylov subspace Kk(B,u1) with dimension k = max{2m, 20}. The base vector u1 is also essential for the Krylov subspace method. Moreover, restarting is adopted to iteratively update the base vector u1. To the best of our knowledge, the mentioned iterative method for partial eigen information is supported in many softwares, e.g., Matlab (eigs function) and Python (Scipy package) etc. For more details, please refer to ARPACK [11]. Returning back to the proposed algorithm with A, instead of calculating the largest (in terms of the absolute value) m eigenvalues of A, we aim to get m (algebraically) smallest eigenvalues { 1, · · · , m} and the corresponding eigenvectors {v1, · · · ,vm}. We first roughly calculate a shift value kAk by several steps of power iteration (Hessian-vector products). Then, let B = ·I A, whose eigenvalues { n, · · · , 1} are non-negative and the corresponding eigenvectors are {vn, · · · ,v1}. Applying the mentioned Krylov subspace algorithm, we first obtain an estimate of shifted eigenvalues { 1, · · · , m} and the corresponding eigenvectors {v1, · · · ,vm} for A, since { m, · · · , 1} are largest eigenvalues of B. To further lower the computational cost, we may adopt k-dimensional Krylov subspace Kk(B,u1) for m eigenvalues without restarting in implementation with k = m. Details for Step 2. Instead of directly solving (5), Cartis et al. [3] recommended to find the root for the equivalent equation: w̃1( ;µ) = vuut mX i=1 c2i ( i + )2 + nX i=m+1 c2i (µ+ )2 ⇢ , (12) which is convex on ( 1,+1). 
Moreover, under perfect initialization, Newton’s method is proved to achieve (locally) quadratic convergence. However, we numerically find that it depends much on the initialization and may converge to a point outside the feasible domain ( 1,+1) if it has imperfect initialization. Here, we recommend to use the bisection method to find the root of (5) or (12) due to its linear convergence, stability and ease of implementation. For the weighted average µ defined in (10), we can rewrite it as a more tractable but equivalent form: µ2 = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i = bT(A Vm⇤mVTm)b kbk2 Pm i=1 c 2 i = bTAb Pm i=1 c 2 i · i kbk2 Pm i=1 c 2 i , (13) where Vm = [v1, · · · ,vm] and ⇤m = diag( 1, · · · , m). Details for Step 3. There are many well-studied, efficient and reliable iterative methods for (real symmetric) linear systems, e.g., Krylov subspace (Lanczos) methods and conjugate gradient methods etc. We adopt the Lanczos method for solving the linear system (A+ ⇤I)x+ b = 0, where only a few steps of Hessian-vector products are required. In summary, the main computational cost comes from Step 1 and Step 3 for Hessian-vector products (O(mn2)), since solving the root of w1( ;µ) is a 1-dimensional problem in Step 2 and is of cost O(n). Therefore, the total computational cost for the proposed algorithm is O(mn2), much lower than the method based on full eigendecomposition (O(n3)). The selection of m. The choice of the parameter m is important to our CRS solver: a larger m yields a better CRS solution quality but incurs a higher computational cost. If A is a Gaussian random matrix, by (8), we can achieve "-accuracy (i.e., | ⇤1 ⇤| ") if m n satisfies ✓ 3⇡ 4 p 2 ◆2/3 · ✓ 1 m+ 1 n ◆2/3 ". However, the error bound (8) provides only a conservative sufficient conditions for m. Moreover, for general problems without the Gaussian assumption on A, it is hard to choose m based on the "-accuracy, because the error bounds (6) and (11) are implicit in m. Therefore, adaptive methods (or heuristic methods) for selecting m are necessary in practice. A natural way is to check the suboptimality (gradient norm) in each step, and enlarge m by m0 (i.e., we set m = max{m + m0,mmax}, where mmax is the maximal number of eigenvalues we adopt in ASEM), if the output does not satisfy the given condition for suboptimality. Moreover, numerical experiments on CUTEst problems (see Experiment 6 in Section 5) shows that m = 1 is enough for most of the cases. We left the study for the selection of m as future work. To the best of our knowledge, the Krylov subspace method [3, 2] for CRS suffers from a similar issue of hyperparameter selection. 5 Experimental Results Without the loss of generality (see Appendix D), we assume that A is diagonal in the synthetic CRS instances, for simplicity and fair comparison. Furthermore, we also test the proposed ASEM on CUTEst library [7]. All experiments were run on a Macbook Pro M1 laptop. For more experimental details, please refer to Appendix E. Experiment 1. The distribution for eigenvalues of the matrix A. In the error analysis (Theorem 1 and Theorem 2), the error is controlled by the approximation of µ to unobserved eigenvalues { i}ni=m+1, i.e., maxm+1in | i µ| and maxm+1in( i µ)2. It further implies that the distribution of eigenvalues { i}ni=m+1 is essential for the proposed method. Intuitively, if eigenvalues { i}ni=m+1 cluster around a small interval, then the rough estimate (7) for µ is enough to approximate the unknown eigenvalues well. 
5 Experimental Results

Without loss of generality (see Appendix D), we assume that A is diagonal in the synthetic CRS instances, for simplicity and fair comparison. Furthermore, we also test the proposed ASEM on the CUTEst library [7]. All experiments were run on a MacBook Pro M1 laptop. For more experimental details, please refer to Appendix E.

Experiment 1. The distribution of the eigenvalues of the matrix A. In the error analysis (Theorem 1 and Theorem 2), the error is controlled by how well µ approximates the unobserved eigenvalues {λ_i}_{i=m+1}^{n}, i.e., by max_{m+1≤i≤n} |λ_i − µ| and max_{m+1≤i≤n} (λ_i − µ)^2. This implies that the distribution of the eigenvalues {λ_i}_{i=m+1}^{n} is essential for the proposed method. Intuitively, if the eigenvalues {λ_i}_{i=m+1}^{n} cluster within a small interval, then the rough estimate (7) for µ is enough to approximate the unknown eigenvalues well. Conversely, if the eigenvalues {λ_i}_{i=m+1}^{n} spread across a large interval, then we cannot expect a single µ to estimate all of them. Here, we design four cases for the distribution of the eigenvalues of A to illustrate these theoretical observations. Case 1 (evenly spaced): all eigenvalues {λ_i}_{i=1}^{n} are evenly spaced in [−1, 1]. Case 2 (separated): half of the eigenvalues are far away from the rest, i.e., {λ_i}_{i=1}^{n/2} are evenly spaced in [−1, −4/5] and the remaining {λ_i}_{i=n/2+1}^{n} are evenly spaced in [4/5, 1]. Case 3 (right centered): the smallest 2% of the eigenvalues and the remaining 98% gather together respectively, i.e., {λ_i}_{i=1}^{n/50} and {λ_i}_{i=n/50+1}^{n} are evenly spaced in [−1, −4/5] and [4/5, 1], respectively. Case 4 (left centered): the largest 2% of the eigenvalues and the remaining 98% gather together respectively, i.e., {λ_i}_{i=1}^{49n/50} are evenly spaced in [−1, −4/5] and {λ_i}_{i=49n/50+1}^{n} are evenly spaced in [4/5, 1]. The vector b is proportional to [1, · · · , 1]^T with ‖b‖ = 0.1. The remaining parameters are n = 5 × 10^3 and ρ = 0.1. We adopt the first-order ASEM here (i.e., µ is defined in (7)). Figure 1 validates our theory that the proposed algorithm converges fast if the unknown eigenvalues {λ_i}_{i=m+1}^{n} are close to each other. Moreover, without computing all eigenvalues, partial eigen information is enough to achieve satisfactory solutions in practice, except in the hard case (e.g., Case 4).
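For concreteness, here is one way the four synthetic spectra (and the corresponding diagonal CRS instance) could be generated. The interval endpoints and split sizes follow the description above; the function name and the exact index splits are our own reading of it, and the resulting hvp can be fed to the sketches given earlier.

```python
import numpy as np

def synthetic_spectrum(case, n=5000):
    """Sketch of the four eigenvalue distributions of Experiment 1
    (A is diagonal in the synthetic instances; names are ours)."""
    if case == 1:    # evenly spaced in [-1, 1]
        lam = np.linspace(-1.0, 1.0, n)
    elif case == 2:  # separated: half in [-1, -4/5], half in [4/5, 1]
        lam = np.concatenate([np.linspace(-1.0, -0.8, n // 2),
                              np.linspace(0.8, 1.0, n - n // 2)])
    elif case == 3:  # right centered: smallest 2% in [-1, -4/5], rest in [4/5, 1]
        k = n // 50
        lam = np.concatenate([np.linspace(-1.0, -0.8, k),
                              np.linspace(0.8, 1.0, n - k)])
    else:            # left centered: 98% in [-1, -4/5], largest 2% in [4/5, 1]
        k = n - n // 50
        lam = np.concatenate([np.linspace(-1.0, -0.8, k),
                              np.linspace(0.8, 1.0, n - k)])
    return lam

# Diagonal CRS instance as used in the synthetic experiments.
n, rho = 5000, 0.1
lam = synthetic_spectrum(1, n)
b = np.ones(n)
b *= 0.1 / np.linalg.norm(b)        # b proportional to [1,...,1]^T, ||b|| = 0.1
hvp = lambda v: lam * v             # A = diag(lam), so A @ v = lam * v
```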
Experiment 2. The effect of the parameter µ (first-order and second-order ASEM). For the first- and second-order ASEM, we define µ according to (7) and (10), respectively. Here, we test the effect of µ on the proposed method for solving the cubic regularized quadratic problem, with the other parameters fixed. Case 1 (first-order ASEM): µ is the mean of the unknown eigenvalues, as defined in (7). Case 2 (second-order ASEM): µ is the weighted average of the unknown eigenvalues with weights c_i^2 (see (10)). Case 3 (first-order ASEM): µ = λ_m, the largest observed eigenvalue. Case 4: µ = 10^6, much larger than the eigenvalues of A, as an approximation to +∞. The vector b is proportional to [λ_1, · · · , λ_n]^T with length ‖b‖ = 0.1. The eigenvalues of A are evenly spaced in [−1, 1]. The remaining parameters are n = 5 × 10^3 and ρ = 0.1. Figure 2 shows the superiority of the second-order ASEM over the first-order ASEM: it is more stable and converges faster when m is large, consistent with Theorem 1 and Theorem 2. Moreover, the results further underline the importance of the choice of µ. Several observations follow from Figure 2. First, we cannot discard the residual term involving the unknown eigenvalues {λ_i}_{i=m+1}^{n}, since they still carry much information, as shown in Case 1 and Case 4. Second, an arbitrary choice of µ does not work well and may even cause divergence (e.g., Case 3 and Case 4). Furthermore, a suitable choice of µ leads to a well-behaved algorithm (e.g., Case 1 and Case 2).

Experiment 3. Approximation capability of the Krylov subspace method for ASEM. As introduced in Section 4, we adopt the m-dimensional Krylov subspace K_m(B, u_1) to approximately calculate the m algebraically smallest eigenvalues {λ_1, · · · , λ_m} and the corresponding eigenvectors {v_1, · · · , v_m}. We now investigate the performance of ASEM with estimated eigenvalues and eigenvectors. The vector b is proportional to [1, · · · , 1]^T with length ‖b‖ = 0.1. The eigenvalues of A are evenly spaced in [−1, 1]. The remaining parameters are n = 5 × 10^3 and ρ = 0.1. We adopt the first-order ASEM here (i.e., µ is defined in (7)). Trajectories of the suboptimality with exact and approximate eigenvalues and eigenvectors are shown in Figure 3. ASEM with the Krylov subspace K_m(B, u_1) of relatively low dimension m matches ASEM with exact eigenvalues well. This experiment justifies the use of the m-dimensional Krylov subspace for the m eigenvalues and eigenvectors in ASEM.

Experiment 4. Comparison of ASEM with the Krylov subspace method [3, 2] and the gradient descent method [1] on synthetic problems. For large-scale problems, the Krylov subspace method and the gradient descent method are two state-of-the-art methods for the CRS (1). In this experiment, we compare the proposed ASEM against both. The dominant computational step for all three methods is the Hessian-vector product (O(mn^2)). The vector b is proportional to [1, · · · , 1]^T with length ‖b‖ = 0.1. The eigenvalues of A are evenly spaced in [−1, 1]. Similar to the setting in [2], we define the condition number for (1) as κ = (λ_n + ρ‖x*‖)/(λ_1 + ρ‖x*‖) = (λ_n + λ*)/(λ_1 + λ*). Then, we have λ* = (λ_n − κλ_1)/(κ − 1) and ρ = λ*/‖(A + λ*I)^{-1} b‖. Case 1: easy case with κ = 10^3; Case 2: harder case with κ = 10^6. The remaining parameter is n = 5 × 10^3. We adopt the first-order ASEM (i.e., µ is defined in (7)). As shown in Figure 4, ASEM outperforms both the gradient descent method and the Krylov subspace method when m is relatively large. It is reasonable that ASEM underperforms when m is small, since the m-dimensional Krylov subspace then cannot approximate the eigenvalues and eigenvectors of A well. The results further demonstrate the performance of the proposed ASEM method.
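The condition-number construction of Experiment 4 can be reproduced for a diagonal A in a few lines. This is our own sketch of the formulas λ* = (λ_n − κλ_1)/(κ − 1) and ρ = λ*/‖(A + λ*I)^{-1}b‖ stated above; names are ours.

```python
import numpy as np

def instance_with_condition_number(kappa, n=5000, b_norm=0.1):
    """Sketch: diagonal CRS instance with prescribed condition number
    kappa = (lam_n + lam*) / (lam_1 + lam*), following Experiment 4."""
    lam = np.linspace(-1.0, 1.0, n)                 # eigenvalues of A
    b = np.ones(n) * (b_norm / np.sqrt(n))          # b ~ [1,...,1]^T, ||b|| = b_norm
    lam_star = (lam[-1] - kappa * lam[0]) / (kappa - 1.0)
    x_star_norm = np.linalg.norm(b / (lam + lam_star))   # ||(A + lam* I)^{-1} b||
    rho = lam_star / x_star_norm                    # so that rho * ||x*|| = lam*
    return lam, b, rho
```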
Experiment 5. Comparison of ASEM with the Cauchy point method [5], the gradient descent method and the Krylov subspace method on CUTEst problems [7]. The CUTEst library collects various unconstrained and constrained optimization problems arising in real applications. In this part, we compare the numerical performance of the ARC algorithm [3] on four unconstrained optimization problems from the CUTEst library, where the subproblems are solved by the Cauchy point method (ARC-CP), the gradient descent method (ARC-GD), the Krylov subspace method (ARC-Krylov(k), where k is the number of Lanczos basis vectors) and the ASEM method (ARC-ASEM(m), where m is the number of eigenvalues used by ASEM). The architecture of the ARC algorithm is provided in Appendix E.2. Four unconstrained optimization problems from the CUTEst library (TOINTGSS, BRYBAND, DIXMAANG and TQUARTIC) are adopted for testing, with dimensions 1000, 2000, 3000 and 5000, respectively. We use the first-order ASEM (µ defined in (7)) here, since we found that the first-order and second-order ASEM do not differ much on these problems. Numerical results are reported in Table 1, where x_out, ‖∇f(x_out)‖, λ_1(∇²f(x_out)), iter and time denote the output of the ARC algorithm, the suboptimality (gradient norm), the minimal eigenvalue of the Hessian at x_out, the number of ARC iterations and the CPU time, respectively. Several observations follow. First, the proposed ASEM outperforms the other methods in most cases and is comparable to the Krylov subspace method in the remaining cases, where ASEM attains a slightly worse suboptimality (gradient norm) or CPU time. Furthermore, only one eigenvalue is enough for ASEM to perform well (i.e., ARC-ASEM(m) with m = 1), which is surprising. For more experimental details, please refer to Appendix E.

6 Conclusion

We develop the first-order and second-order truncated secular equations as surrogates to the secular equation with full eigendecomposition for solving the CRS (1). The proposed ASEM is an efficient alternative to existing methods for solving the CRS, since it reduces the computational cost from O(n^3) to O(mn^2). Our CRS solver features rigorous theoretical error bounds, which are related to the amount of eigen information used. We also discuss in detail the implementation of the proposed algorithm ASEM; in particular, we show how only Hessian-vector products are needed, and no matrix inversion. Numerical experiments are conducted to further investigate the properties and performance of the proposed ASEM and to corroborate the theoretical results. In our experiments, the proposed ASEM is more efficient than the state-of-the-art methods on synthetic and CUTEst problems.

Acknowledgement

We would like to thank the anonymous reviewers and chairs for their helpful comments. Michael K. Ng is supported by Hong Kong Research Grant Council GRF 12300218, 12300519, 17201020, 17300021, C1013-21GF, C7004-21GF and Joint NSFC-RGC N-HKU76921. Man-Chung Yue is supported by the Research Grants Council (RGC) of Hong Kong under the GRF project 15305321.
1. What is the focus and contribution of the paper regarding cubic regularization?
2. What are the strengths and weaknesses of the proposed approach, particularly in its implementation and comparisons with other works?
3. Do you have any questions or concerns about the truncation-based approximation method used in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential applications of the paper that the reviewer wants to highlight?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper studies the cubic regularization technique and proposes two methods for approximating the secular equation based on first-order and second-order Taylor expansions. Theoretical discussions, including existence and uniqueness of the solution together with error analysis, are provided. Five numerical experiments are conducted, with performance comparable to the state of the art. However, the trade-off between accuracy improvement and computational reduction for these truncation-based approaches is not clear. Overall, the paper looks interesting with a variety of applications, but with limited novelty.

Strengths And Weaknesses
Strengths:
- The paper proposes to use simple Taylor approximations to approximate the secular equation, which is intended to simplify the computation.
- Implementation details are provided, together with various numerical comparisons with the state of the art.
Weaknesses:
- The Taylor-expansion-based truncation seems intuitive and standard, and it introduces inaccuracy in the approximation. It is not fully clear how this type of approximation affects the accuracy and reduces the computational cost. The general higher-order approximation could be discussed as well.
- In addition, in the numerical results, say Table 1, ARC-Krylov seems to outperform the proposed method in terms of running time. A comparison of computational complexity in big-O notation could be given with theoretical discussion.

Questions
- In line 212, what does "computational waste" mean here?
- In Table 1, it may not be fair to simply compare the overall running time. The running time per iteration could be compared as well. In addition, does the number of iterations depend on the selection of certain parameters in all methods being compared? If yes, please discuss which parameters are sensitive and provide guidelines for tuning if available.
- In Section 5, it would be better to describe the computing platform and computer configuration at the very beginning of the section rather than in an unnoticeable place in Experiment 5.

Limitations
Regularization techniques have been widely used in solving many application problems, such as inverse problems and machine learning. However, the paper does not show any such application experiment, which limits its practical use and appeal.
NIPS
1. What is the focus and contribution of the paper regarding nonconvex optimization?
2. What are the strengths of the proposed scheme, particularly in terms of efficiency and generality?
3. Do you have any concerns or questions about the numerical experiments or the error bounds presented in the paper?
4. Are there any minor issues or typos in the paper that could be improved?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes an efficient and general scheme for finding approximate solutions to the cubic regularization subproblem (CRS) used in the classic cubic regularization method in nonconvex optimization. The main advantage of the scheme lies in the efficient computation of the unique root of a given truncated approximate secular equation (ASE), which scales as O(mn^2), where n is the dimension of the underlying problem and m < n is a parameter that balances the accuracy of the scheme against the overall computational cost. Numerical experiments are given to demonstrate the behavior of the scheme for different parameter choices, different kinds of problem instances, and a subset of the well-known CUTEst optimization problem dataset.

Strengths And Weaknesses
Disclaimer: My review is limited to the material presented in the 9-page body of the paper, and does not consider the materials in the supplement.
Strengths:
- The numerical experiments are significant in their scope and quality. Specifically, they test the behavior of the proposed scheme (ASEM) with respect to its key parameter µ and the distribution of eigenvalues in the CR subproblem. I also appreciate the inclusion of other well-known algorithms in the benchmarks, such as gradient descent, the Cauchy-point method, and the Krylov subspace method.
- The error bounds in Theorems 1 and 2 are highly appreciated, as they encapsulate the expected behavior of the scheme and do not abuse asymptotic notation to hide any universal constants.
- The complexity improvement from O(n^3) to O(mn^2) is impactful from both a theoretical and a practical point of view.
- The writing of the paper is both clear and concise. Moreover, the remarks following some of the more important results, e.g., Proposition 3, are both welcome and informative.
(Minor) Weaknesses:
- There are a few places that could be better with additional clarifying statements (see the Questions section below).
- There are a few minor typos (see the Questions section below).

Questions
- Line 36: Do you mean VΛV^T?
- Proposition 2: Make this slightly more self-contained by reiterating what v_1 and x* are.
- End of Section 1: A topic that might arouse more interest in the paper early on is a discussion of the choice of m (which is discussed later, starting on line 211). Hence, it would be helpful to add 1-2 sentences at the end of Section 1 to say that this topic will be discussed later in the paper.
- The choice of µ in Theorem 2 does not seem to be immediately implementable, since it relies on λ_{m+1}, ..., λ_n. However, equation (13) later on gives a tractable form for µ. Perhaps a remark in (or after) the theorem should be made to direct the reader to (13).
- Line 197: "...if with imperfect initialization." Missing word?
- Line 183: Return <- Returning
- Line 289: "The rest of the parameter..." <- "The remaining parameter..."
- The words "Krylov" and "Krylob" are used interchangeably throughout. I would suggest picking only one of these spellings.

Limitations
The authors have sufficiently addressed all limitations (assumptions) in this paper.
NIPS
Title Approximate Secular Equations for the Cubic Regularization Subproblem Abstract The cubic regularization method (CR) is a popular algorithm for unconstrained non-convex optimization. At each iteration, CR solves a cubically regularized quadratic problem, called the cubic regularization subproblem (CRS). One way to solve the CRS relies on solving the secular equation, whose computational bottleneck lies in the computation of all eigenvalues of the Hessian matrix. In this paper, we propose and analyze a novel CRS solver based on an approximate secular equation, which requires only some of the Hessian eigenvalues and is therefore much more efficient. Two approximate secular equations (ASEs) are developed. For both ASEs, we first study the existence and uniqueness of their roots and then establish an upper bound on the gap between the root and that of the standard secular equation. Such an upper bound can in turn be used to bound the distance from the approximate CRS solution based ASEs to the true CRS solution, thus offering a theoretical guarantee for our CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning. Numerical experiments with synthetic and real data-sets are conducted to investigate the practical performance of the proposed CRS solver. Experimental results show that the proposed solver outperforms two state-of-the-art methods. 1 Introduction The cubic regularization method (CR) is a variant of Newton’s method proposed by Griewank [8], and later independently by Nesterov and Polyak [12], and Weiser et al. [16]. It gained significant attention over the last decade due to its attractive theoretical properties, such as convergence to second-order critical points[12] and quadratic convergence rate under mild assumptions [17]. Each iteration of CR solves a problem of the following form, called the cubic regularization subproblem (CRS): min x2Rn fA,b,⇢(x) := b Tx+ 1 2 xTAx+ ⇢ 3 kxk3, (1) where ⇢ > 0 is the regularization parameter, b 2 Rn, and A 2 Rn⇥n is a symmetric matrix, not necessarily positive semidefinite. Many variants and generalizations of CR are developed, including 36th Conference on Neural Information Processing Systems (NeurIPS 2022). the Adaptive Regularization Using Cubics (ARC) which allows for a dynamic choice of ⇢ and inexact CRS solutions [3, 4], accelerated CR using momentum [15] and stochastic CR for solving stochastic optimization [14]. Despite the theoretical success, the practicality of CR and its variants relies critically on the CRS solver, a topic that attracts considerable research recently [2, 1, 10, 9]. The goal of this paper is to develop a novel, efficient CRS solver along with theoretical guarantees. A popular approach for solving the CRS is via solving the so-called secular equation. We now review this approach. Towards that, we denote by 1 · · · n the eigenvalues of A and by v1, · · · ,vn the corresponding eigenvectors. In other words, we have the eigendecomposition A = Pn i=1 iviv T i = V⇤V T, where ⇤ = diag( 1, . . . , n) and V = [v1, · · · ,vn]. Note that eigenvalues i are not necessarily positive due to the indefiniteness of the matrix A. Also, we denote the Euclidean norm by k · k. Proposition 1 ([12, 3]). A vector x⇤ solves the CRS (1) if and only if it satisfies the system ⇢ (A+ ⇢kx⇤kI)x⇤ + b = 0, A+ ⇢kx⇤kI ⌫ 0. 
(2) (3) Moreover, if A+ ⇢kx⇤kI 0, then x⇤ is the unique solution (and hence a critical point). Proposition 2 ([1]). Let x⇤ be a global solution of CRS (1) and the eigendecomposition for A = n i=1 iviv T i = V⇤V T , where ⇤ = diag( 1, . . . , n) and V = [v1, · · · ,vn]. If bTv1 6= 0, then A + ⇢kx⇤kI 0 and the solution x⇤ is the unique critical point (and hence the unique solution). Conversely, if bTv1 = 0, then the CRS (1) has multiple optimal solutions. From Proposition 2, if bTv1 6= 0, then there is only one critical point, which is also the optimal solution, and hence the gradient norm krfA,b,⇢(x)k serves as an optimality measure. Throughout the paper, we assume bTv1 6= 0, under which the CRS is said to be in the easy case. This is without much loss of generality as this holds generically true in practice. Moreover, we could easily avoid the hard case (bTv1 = 0) by slightly perturbing the vector b, see [12, 2]. To introduce the secular equation, note that in the easy case, conditions (2) and (3) can be written as ⇢ (⇤+ I) · y⇤ = c, 1 + > 0. where =: ⇢kx⇤k, [y⇤1 , · · · , y⇤n]T := y⇤ = VTx⇤ and [c1, · · · , cn]T := c = VTb. Therefore, y⇤i = ci i + , i = 1, . . . , n. Since the Euclidean norm is invariant to orthogonal transformation, we have 2 ⇢2 = kx⇤k2 = ky⇤k2 = nX i=1 c2i ( i + )2 . Consequently, instead of solving the complicated nonlinear system (2)-(3), we could solve the CRS (1) by first finding the (unique) root > max{ 1, 0} of the equation w( ) = nX i=1 c2i ( i + )2 2 ⇢2 , (4) called the secular equation, and then solves the linear system (A+ I)x = b. The first step can be done efficiently by using existing root-finding algorithms (e.g., the bisection method and Newton’s method etc.). The disadvantage of the above CRS solver, based on the secular equation (4), is that it requires the full spectrum of A, which costs O(n3). This approach is viable only for low- to moderate-dimensional problems. However, when n is large, computing all eigenvalues of A is prohibitive. Worse still, after the root is solved, we still need to apply iterative methods (e.g., Lanczos method) to solve the large-scale linear system (2). We are thus motivated to approximate the secular equation by using only some of the eigenvalues of A, as opposed to all. As our main contribution, we developed two different approximate secular equations (ASEs), both of which require computing m < n eigenvalues of A. The cost for forming the approximate secular equations is only O(mn2), and hence the resulting CRS solver is much more efficient and scalable. On the theoretical side, for each of the proposed approximate secular equations, we first studied the existence and uniqueness of its root, and then derived an upper bound on the gap between the root and that of the standard secular equation (4). This upper bound is in turn used to bound the distance from the approximate CRS solution based ASEs to the true CRS solution, thus offering a theoretical guarantee for the proposed CRS solver. A desirable feature of our CRS solver is that it requires only matrix-vector multiplication but not matrix inversion, which makes it particularly suitable for high-dimensional applications of unconstrained non-convex optimization, such as low-rank recovery and deep learning. On the empirical side, we conducted experiments with both synthetic and real problem instances to investigate the practical performance of the proposed CRS solver and the associated CR. 
Experimental results showed that the proposed solver outperforms two state-of-the-art methods. The selection of m for the proposed ASEM is an interesting and crucial topic. We will discuss related issues in Section 4 and some numerical explorations are also presented in Section 5. 2 The First-Order Truncated Secular Equation We define the first-order truncated secular equation by w1( ;µ) = mX i=1 c2i ( i + )2 + nX i=m+1 c2i (µ+ )2 2 ⇢2 , (5) where µ m is an input parameter that approximates the unobserved eigenvalues m+1, · · · , n, ci = bTvi and Pn i=m+1 c 2 i = kbk 2 Pm i=1 c 2 i . Note that only m eigenvalues 1, · · · , m and their corresponding eigenvectors v1, · · · ,vm are needed to form (5), which is computationally friendlier compared with (4). The name first-order truncated secular equation comes from the fact that w1( , µ) is the first-order Taylor approximation to the function w( ). Below we will first study the existence and uniqueness for the root of (5). Then, we derive an error bound for the root. 2.1 Existence and Uniqueness for the Root In the easy case that bTv1 6= 0 (equivalently, c1 6= 0), the solution x⇤ to the CRS (1) is unique, which implies the existence and uniqueness for the root ⇤ of (4). To show that our proposed CRS solver is also well-defined, we prove the existence and uniqueness of the root of the first-order truncated secular equation (5). Lemma 1. For any µ m, the function w1(·;µ) as defined in (5) admits a unique root. Proof. Existence. We first consider the case when 1 0. Then, for any fixed µ m, lim !( 1)+ w1( ;µ) = +1 and lim !+1 w1( ;µ) = 1, By the intermediate value theorem, w1(·;µ) has a root in ( 1,+1). For 1 > 0, we have w1(0;µ) > 0 and lim !+1 w1( ;µ) = 1. Therefore, w1( ;µ) has a root in (0,+1). Uniqueness. Note that w1( ;µ) is monotonically decreasing for 2 ( 1,+1) and 2 (0,+1) when 1 0 and 1 > 0, respectively. Therefore, the uniqueness of the root for w1( ;µ) is guaranteed. 2.2 Error Analysis In order to study the quality of the CRS solution based on our proposed solver using approximate secular equations, we need to study the quality of the root to the first-order truncated secular equation, denoted by ⇤1 . Towards that end, we provide an upper bound on the gap | ⇤1 ⇤| between ⇤1 and the root ⇤ of the exact secular equation (4). Theorem 1. Let ⇤1 and ⇤ be the unique roots of w1( ;µ) and w( ), respectively. Then | ⇤1 ⇤ | Cm · max m+1in | i µ|, (6) where Cm > 0 is a constant, upper bounded by 2kbk2 ( m 1)3 · min n ( d+B1) 3 2kbk2 , ⇢2 2B1 o with B1 = 1+ p 21+4⇢·kbk 2 being an upper bound for | ⇤ 1 |. We clearly see that the right-hand side of inequality (1) is decreasing in m. This confirms that using more eigen information (i.e., larger m) helps to reduce the error | ⇤1 ⇤|. The proof of Theorem 1 is technical and quite long and hence relegated to Appendix A. The approximation quality of our CRS solver is guaranteed by combining Theorem 1 with the following proposition. Proposition 3. Let x⇤ and x̃ be solutions to the equations (A+ ⇤I)x⇤ = b and (A+ ⇤1I) x̃ = b, respectively. Then, kx̃ x⇤k = O (| ⇤1 ⇤ |). Proof. By definition, we have x⇤ = nX i=1 ( i + ⇤) 1 viv T i · ( b) = nX i=1 ( i + ⇤) 1 ci · vi, and x̃ = nX i=1 ( i + ⇤ 1) 1 viv T i · ( b) = nX i=1 ( i + ⇤ 1) 1 ci · vi, then kx̃ x⇤k = nX i=1 ⇣ ( i + ⇤ 1) 1 ( i + ⇤) 1 ⌘ viv T i · ( b) ✓ max 1in n ( i + ⇤ 1) 1 ( i + ⇤) 1 o◆ · kbk = O (| ⇤1 ⇤ |) . This completes the proof. Before ending this section, some remarks are in order. 
First, the parameter µ acts as an approximation to n m unknown eigenvalues m+1, · · · , n. An intuitive choice of µ that works well in practice and is computationally cheap is the average of unknown eigenvalues, i.e., µ1 = Pn i=m+1 i n m = tr(A) Pm i=1 i n m . (7) Second, the error bound Cm ·maxm+1in | i µ| in Theorem 1 depends on the distribution of eigenvalues of A. If the unobserved eigenvalues m+1, · · · , n cluster around a small interval, then with a suitable choice of µ 2 [ m+1, n], maxm+1in | i µ| is small. Conversely, if the unknown eigenvalues spread over a large interval, then it is hard to make the error maxm+1in | i µ| small. Third, it is instructive to study the error bound (6) under some random matrix model for A. Suppose that A = eA/ p 2n, where eA is a symmetric random matrix with i.i.d. entries on and above the diagonal. By the Wigner semicircle law [6], as n ! 1, the eigenvalues of A distribute according to a density of a semi-circle shape. In particular, we can deduce that with a probability of 1 o(1), max m+1in | i µ| O ✓ 1 m+ 1 n ◆2/3! ⇡ ✓ 3⇡ 4 p 2 ◆2/3 · ✓ 1 m+ 1 n ◆2/3 (8) The detailed proof of (8) and further discussions under random A can be found in Appendix C. 3 The Second-Order Truncated Secular Equation Similarly to the equation (5), but with the second-order Taylor approximation, we define the secondorder truncated secular equation by w2( ;µ) = mX i=1 c2i ( i + )2 + nX i=m+1 c2i (µ+ )2 2 nX i=m+1 c2i · ( i µ) (µ+ )3 2 ⇢2 , (9) where µ m is an input parameter that approximates the unobserved eigenvalues m+1, · · · , n. 3.1 Existence and Uniqueness for the Root The lemma blew shows the existence and uniqueness of the root of w2(·;µ). Lemma 2. With µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i , (10) the function w2(·;µ) as defined in (9) admits a unique root. Proof. When µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i , the third summation in the definition (9) vanishes, and hence w2( , µ) becomes the same as w1( , µ), except with a specific choice of µ. The desired conclusion then follows from Lemma 1. Unlike its first-order counterpart, we do not develop the existence and uniqueness of the root of the second-order truncated secular equation for arbitrary µ. The reason is that when 1 > 0, w2(0;µ) can potentially be positive or negative. 3.2 Error Analysis Similar to that for the first-order truncated secular equation, we can also derive an error bound for the root of the second-order truncated secular equation. Theorem 2. Let ⇤2 and ⇤ be the unique root of w2( ;µ) and w( ), respectively, and µ = Pn i=m+1 c 2 i · iPn i=m+1 c 2 i . Then, | ⇤2 ⇤ | Cm · max m+1in ( i µ) 2, (11) where Cm > 0 is a constant bounded by 3kbk2 ( m 1)4 · min n ( n+B1) 3 2kbk2 , ⇢2 2B1 o with B1 = 1+ p 21+4⇢·kbk 2 being an upper bound for | ⇤ 2 |. The proof of Theorem 2 can be found in Appendix B. We can similarly estimate the approximation quality by combining Theorem 2 and Proposition 3. Again, the right-hand side of the error bound (11) is decreasing in m. We should also point out that the CRS solver based on the second-order secular equation outperforms the first-order counterpart only if maxm+1in | i µ|/| m 1| is small enough. The computation of µ here requires cm+1, · · · , cn, which seem to be inaccessible. We provide a tractable form for µ in (13) and will discuss it in the next part. 4 Implementation Details We now discuss the implementation details for solving the proposed first-order secular equation for CRS. 
4 Implementation Details

We now discuss the implementation details for solving the proposed first-order secular equation for CRS. First, we obtain the partial eigen information $\{\lambda_1,\cdots,\lambda_m\}$ and $\{v_1,\cdots,v_m\}$ by Krylov subspace methods. Note that only Hessian-vector products are required. This is computationally friendlier than methods that rely on matrix inversions and is particularly suitable for modern, high-dimensional applications. Then, we solve the first-order secular equation (5) with $\mu$ defined in (7) or (10), using any root-finding algorithm, such as Newton's method. Finally, we solve the linear system $(A + \sigma^* I)x = -b$ by iterative algorithms, e.g., the Lanczos method or the conjugate gradient method. The resulting CRS solver, namely the approximate secular equation method (ASEM), is summarized as follows:

Step 1: obtain the partial eigen information $\{\lambda_1,\cdots,\lambda_m\}$ and $\{v_1,\cdots,v_m\}$ of A.
Step 2: solve the secular equation (5) with $\mu$ defined in (7) or (10), obtaining $\sigma^*$.
Step 3: iteratively solve the linear system $(A + \sigma^* I)x + b = 0$.
Output: the solution x.

Details for Step 1. The Krylov subspace method is one of the most popular iterative methods for eigenproblems, with $O(mn^2)$ computational cost [13]. The Lanczos decomposition for a real symmetric matrix B satisfies
$$B U_k = U_k T_k + \beta_k u_{k+1} e_k^T,$$
where $U_k \in \mathbb{R}^{n \times k}$ is an orthonormal matrix (i.e., $U_k^T U_k = I_k$), $T_k \in \mathbb{R}^{k \times k}$ is a symmetric tridiagonal matrix, and $e_k \in \mathbb{R}^k$ is the k-th standard basis vector in $\mathbb{R}^k$. Lanczos observed that even for comparatively small k, $T_k$ approximates B very well in terms of eigenvalues and eigenvectors. Specifically, for a suitable eigenpair $(\theta, w)$ of $T_k$ with $T_k w = \theta w$, the pair $(\theta, U_k w)$ is an approximate eigenpair of B, i.e., $Bz \approx \theta z$ with $z = U_k w$. Here, the Krylov subspace is constructed from $u_1$, the first column of $U_k = [u_1,\cdots,u_k]$, i.e., $\mathcal{K}_k(B, u_1)$. Note that $T_k$ approximates B for the eigenvalues with largest modulus (absolute value) and the corresponding eigenvectors. Empirically, to compute the m eigenvalues of B with largest absolute values and the corresponding eigenvectors, we usually construct the Krylov subspace $\mathcal{K}_k(B, u_1)$ with dimension $k = \max\{2m, 20\}$. The base vector $u_1$ is also essential for the Krylov subspace method; restarting is adopted to iteratively update it. To the best of our knowledge, this iterative method for partial eigen information is supported in many software packages, e.g., Matlab (the eigs function) and Python (the SciPy package); for more details, please refer to ARPACK [11]. Returning to the proposed algorithm with A, instead of computing the m eigenvalues of A largest in absolute value, we aim to obtain the m algebraically smallest eigenvalues $\{\lambda_1,\cdots,\lambda_m\}$ and the corresponding eigenvectors $\{v_1,\cdots,v_m\}$. We first roughly estimate a shift value $\delta \ge \|A\|$ by several steps of power iteration (Hessian-vector products). Then, let $B = \delta I - A$, whose eigenvalues $\{\delta - \lambda_n, \cdots, \delta - \lambda_1\}$ are non-negative and whose corresponding eigenvectors are $\{v_n,\cdots,v_1\}$. Applying the Krylov subspace algorithm above to B, we obtain estimates of $\{\lambda_1,\cdots,\lambda_m\}$ and the corresponding eigenvectors $\{v_1,\cdots,v_m\}$ of A, since $\{\delta - \lambda_m, \cdots, \delta - \lambda_1\}$ are among the largest eigenvalues of B. To further lower the computational cost, we may adopt a k-dimensional Krylov subspace $\mathcal{K}_k(B, u_1)$ with $k = m$, without restarting, in the implementation.
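As one concrete way to realize Step 1 with Hessian-vector products only, the sketch below (an editor's illustration with a hypothetical helper name; the paper itself defers to ARPACK-style routines) uses scipy.sparse.linalg.eigsh on a LinearOperator, so A is never formed explicitly. Note that eigsh can request the smallest algebraic eigenvalues directly, which sidesteps the explicit shift $B = \delta I - A$ described above; the shift-based route is equally valid.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def partial_eigs_from_hvp(hvp, n, m):
    """Estimate the m algebraically smallest eigenpairs of the symmetric
    operator defined only through the Hessian-vector product `hvp`."""
    op = LinearOperator((n, n), matvec=hvp, dtype=float)
    # 'SA' = smallest algebraic eigenvalues; only matvecs with `op` are used.
    lam, V = eigsh(op, k=m, which="SA")
    order = np.argsort(lam)
    return lam[order], V[:, order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 500, 5
    M = rng.standard_normal((n, n))
    A = (M + M.T) / np.sqrt(2 * n)          # explicit only for this toy check
    lam_m, V_m = partial_eigs_from_hvp(lambda v: A @ v, n, m)
    print(np.allclose(lam_m, np.sort(np.linalg.eigvalsh(A))[:m]))
```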
Details for Step 2. Instead of directly solving (5), Cartis et al. [3] recommend finding the root of the equivalent equation
$$\tilde{w}_1(\sigma;\mu) = \sqrt{ \sum_{i=1}^{m} \frac{c_i^2}{(\lambda_i+\sigma)^2} + \sum_{i=m+1}^{n} \frac{c_i^2}{(\mu+\sigma)^2} } - \frac{\sigma}{\rho}, \qquad (12)$$
which is convex on $(-\lambda_1, +\infty)$. Moreover, with perfect initialization, Newton's method provably achieves (locally) quadratic convergence. However, we find numerically that it depends heavily on the initialization and may converge to a point outside the feasible domain $(-\lambda_1, +\infty)$ when initialized poorly. We therefore recommend the bisection method for finding the root of (5) or (12), due to its linear convergence, stability, and ease of implementation. For the weighted average $\mu$ defined in (10), we can rewrite it in a more tractable but equivalent form:
$$\mu_2 = \frac{\sum_{i=m+1}^{n} c_i^2 \lambda_i}{\sum_{i=m+1}^{n} c_i^2} = \frac{b^T (A - V_m \Lambda_m V_m^T) b}{\|b\|^2 - \sum_{i=1}^{m} c_i^2} = \frac{b^T A b - \sum_{i=1}^{m} c_i^2 \lambda_i}{\|b\|^2 - \sum_{i=1}^{m} c_i^2}, \qquad (13)$$
where $V_m = [v_1,\cdots,v_m]$ and $\Lambda_m = \mathrm{diag}(\lambda_1,\cdots,\lambda_m)$.

Details for Step 3. There are many well-studied, efficient, and reliable iterative methods for (real symmetric) linear systems, e.g., Krylov subspace (Lanczos) methods and conjugate gradient methods. We adopt the Lanczos method for solving the linear system $(A + \sigma^* I)x + b = 0$, where only a few Hessian-vector products are required.

In summary, the main computational cost comes from Steps 1 and 3, whose Hessian-vector products cost $O(mn^2)$; solving for the root of $w_1(\cdot;\mu)$ in Step 2 is a one-dimensional problem of cost $O(n)$. Therefore, the total computational cost of the proposed algorithm is $O(mn^2)$, much lower than methods based on a full eigendecomposition ($O(n^3)$).

The selection of m. The choice of the parameter m is important for our CRS solver: a larger m yields better CRS solution quality but incurs a higher computational cost. If A is a Gaussian random matrix, then by (8) we can achieve $\varepsilon$-accuracy (i.e., $|\sigma_1^* - \sigma^*| \le \varepsilon$) if $m \le n$ satisfies
$$\left( \frac{3\pi}{4\sqrt{2}} \right)^{2/3} \cdot \left(1 - \frac{m+1}{n}\right)^{2/3} \le \varepsilon.$$
However, the bound (8) provides only a conservative sufficient condition for m. Moreover, for general problems without the Gaussian assumption on A, it is hard to choose m based on $\varepsilon$-accuracy, because the error bounds (6) and (11) are implicit in m. Therefore, adaptive (or heuristic) methods for selecting m are necessary in practice. A natural approach is to check the suboptimality (gradient norm) at each step and, if the output does not satisfy the given suboptimality condition, enlarge m by $m_0$ (i.e., set $m = \min\{m + m_0, m_{\max}\}$, where $m_{\max}$ is the maximal number of eigenvalues we allow in ASEM). Moreover, the numerical experiments on CUTEst problems (see Experiment 5 in Section 5) show that m = 1 is enough in most of the cases. We leave the study of the selection of m as future work. To the best of our knowledge, the Krylov subspace method [3, 2] for CRS suffers from a similar hyperparameter-selection issue.
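Pulling Steps 2 and 3 together, a minimal sketch (an editor's illustration, not the paper's implementation) is given below. It assumes the partial eigen information from Step 1, a surrogate $\mu$ from (7) or (13), and a Hessian-vector product `hvp`; the bisection bracket, tolerances, and helper name are arbitrary choices made for the example.

```python
import numpy as np
from scipy.optimize import bisect
from scipy.sparse.linalg import LinearOperator, cg

def asem_steps_2_3(hvp, b, lam_m, V_m, mu, rho, sigma_hi=1e6):
    """Given m approximate eigenpairs (lam_m ascending, V_m), a surrogate mu,
    and a Hessian-vector product `hvp`, return (sigma, x) with (A + sigma I) x = -b."""
    n = b.shape[0]
    c2_head = (V_m.T @ b) ** 2
    tail_mass = b @ b - c2_head.sum()

    def w1(sigma):
        # First-order truncated secular function (5).
        return (np.sum(c2_head / (lam_m + sigma) ** 2)
                + tail_mass / (mu + sigma) ** 2
                - sigma**2 / rho**2)

    # Step 2: bisection on (max(0, -lambda_1), sigma_hi); w1 decreases from
    # a large positive value to a negative value on this bracket (Lemma 1).
    sigma_lo = max(0.0, -lam_m[0]) + 1e-12
    sigma = bisect(w1, sigma_lo, sigma_hi, xtol=1e-10)

    # Step 3: solve (A + sigma I) x = -b with conjugate gradients (matvecs only).
    op = LinearOperator((n, n), matvec=lambda v: hvp(v) + sigma * v, dtype=float)
    x, info = cg(op, -b, atol=1e-10)
    return sigma, x
```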
5 Experimental Results

Without loss of generality (see Appendix D), we assume that A is diagonal in the synthetic CRS instances, for simplicity and fair comparison. Furthermore, we also test the proposed ASEM on the CUTEst library [7]. All experiments were run on a MacBook Pro M1 laptop. For more experimental details, please refer to Appendix E.

Experiment 1. The distribution of the eigenvalues of the matrix A. In the error analysis (Theorems 1 and 2), the error is controlled by how well $\mu$ approximates the unobserved eigenvalues $\{\lambda_i\}_{i=m+1}^n$, i.e., by $\max_{m+1 \le i \le n} |\lambda_i - \mu|$ and $\max_{m+1 \le i \le n} (\lambda_i - \mu)^2$. This implies that the distribution of the eigenvalues $\{\lambda_i\}_{i=m+1}^n$ is essential for the proposed method. Intuitively, if the eigenvalues $\{\lambda_i\}_{i=m+1}^n$ cluster in a small interval, then the rough estimate (7) for $\mu$ is enough to approximate the unknown eigenvalues well. Conversely, if the eigenvalues $\{\lambda_i\}_{i=m+1}^n$ spread across a large interval, then we cannot expect a single $\mu$ to estimate all of them. Here, we design four cases for the distribution of the eigenvalues of A to illustrate these theoretical observations; a small sketch that reproduces the four spectra follows the list.

Case 1 (evenly spaced): all eigenvalues $\{\lambda_i\}_{i=1}^n$ are evenly spaced in $[-1, 1]$;
Case 2 (separated): half of the eigenvalues are far away from the rest, i.e., $\{\lambda_i\}_{i=1}^{n/2}$ are evenly spaced in $[-1, -4/5]$ and the remaining $\{\lambda_i\}_{i=n/2+1}^{n}$ are evenly spaced in $[4/5, 1]$;
Case 3 (right centered): the minimal 2% of the eigenvalues and the remaining 98% gather together respectively, i.e., $\{\lambda_i\}_{i=1}^{n/50}$ and $\{\lambda_i\}_{i=n/50+1}^{n}$ are evenly spaced in $[-1, -4/5]$ and $[4/5, 1]$, respectively;
Case 4 (left centered): the maximal 2% of the eigenvalues and the remaining 98% gather together respectively, i.e., $\{\lambda_i\}_{i=1}^{49n/50}$ and $\{\lambda_i\}_{i=49n/50+1}^{n}$ are evenly spaced in $[-1, -4/5]$ and $[4/5, 1]$, respectively.

The vector b is proportional to $[1,\cdots,1]^T$ with $\|b\| = 0.1$. The remaining parameters are $n = 5 \times 10^3$ and $\rho = 0.1$. Here we adopt the first-order ASEM (i.e., $\mu$ is defined in (7)). Figure 1 validates our theory that the proposed algorithm converges fast if the unknown eigenvalues $\{\lambda_i\}_{i=m+1}^n$ are close to one another. Moreover, without the need to compute all eigenvalues, partial eigen information is enough to achieve satisfactory solutions in practice, except in hard cases (e.g., Case 4).
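As referenced above, the four eigenvalue layouts are easy to reproduce; the sketch below (an editor's construction from the case descriptions, with A kept diagonal as in the synthetic experiments) generates them along with the vector b.

```python
import numpy as np

def experiment1_spectrum(case, n=5000):
    """Eigenvalue layouts of the four cases (A is diagonal with these entries)."""
    if case == 1:    # evenly spaced in [-1, 1]
        return np.linspace(-1.0, 1.0, n)
    if case == 2:    # half in [-1, -4/5], half in [4/5, 1]
        return np.concatenate([np.linspace(-1.0, -0.8, n // 2),
                               np.linspace(0.8, 1.0, n - n // 2)])
    if case == 3:    # minimal 2% in [-1, -4/5], remaining 98% in [4/5, 1]
        k = n // 50
        return np.concatenate([np.linspace(-1.0, -0.8, k),
                               np.linspace(0.8, 1.0, n - k)])
    if case == 4:    # 98% in [-1, -4/5], maximal 2% in [4/5, 1]
        k = n - n // 50
        return np.concatenate([np.linspace(-1.0, -0.8, k),
                               np.linspace(0.8, 1.0, n - k)])
    raise ValueError("case must be 1, 2, 3, or 4")

lam = experiment1_spectrum(1)
b = np.ones_like(lam)
b *= 0.1 / np.linalg.norm(b)   # b proportional to the all-ones vector, ||b|| = 0.1
```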
Experiment 2. The effect of the parameter $\mu$ (first-order and second-order ASEM). For the first- and second-order ASEM, we define $\mu$ according to (7) and (10), respectively. Here, we test the effect of $\mu$ on the proposed method for solving the cubic-regularized quadratic problem, with the other parameters fixed. Case 1 (first-order ASEM): $\mu$ is the mean of the unknown eigenvalues, as defined in (7); Case 2 (second-order ASEM): $\mu$ is the weighted average of the unknown eigenvalues with weights $c_i^2$ (see (10)); Case 3 (first-order ASEM): $\mu = \lambda_m$, the largest observed eigenvalue; Case 4: $\mu = 10^6$, much larger than the eigenvalues of A, as an approximation to $+\infty$. The vector b is proportional to $[\lambda_1,\cdots,\lambda_n]^T$ with length $\|b\| = 0.1$. The eigenvalues of A are evenly spaced in $[-1, 1]$. The remaining parameters are $n = 5 \times 10^3$ and $\rho = 0.1$. Figure 2 shows the superiority of the second-order ASEM over the first-order ASEM: it is more stable and converges faster when m is large, consistent with Theorems 1 and 2. Moreover, the results further underline the importance of the choice of $\mu$. Several observations follow from Figure 2. Firstly, we cannot discard the residual term involving the unknown eigenvalues $\{\lambda_i\}_{i=m+1}^n$, as they still carry much information (see Cases 1 and 4). Secondly, choosing $\mu$ arbitrarily does not work well and may even cause divergence (see Cases 3 and 4). Furthermore, a suitable choice of $\mu$ leads to a well-behaved algorithm (see Cases 1 and 2).

Experiment 3. Approximation capabilities of the Krylov subspace method for ASEM. As introduced in Section 4, we adopt an m-dimensional Krylov subspace $\mathcal{K}_m(B, u_1)$ to approximately compute the m algebraically smallest eigenvalues $\{\lambda_1,\cdots,\lambda_m\}$ and the corresponding eigenvectors $\{v_1,\cdots,v_m\}$. We now investigate the performance of ASEM with estimated eigenvalues and eigenvectors. The vector b is proportional to $[1,\cdots,1]^T$ with length $\|b\| = 0.1$. The eigenvalues of A are evenly spaced in $[-1, 1]$. The remaining parameters are $n = 5 \times 10^3$ and $\rho = 0.1$. We adopt the first-order ASEM (i.e., $\mu$ is defined in (7)) here. Trajectories of the suboptimality with exact and approximate eigenvalues and eigenvectors are shown in Figure 3. ASEM with the relatively low-dimensional Krylov subspace $\mathcal{K}_m(B, u_1)$ matches ASEM with exact eigenvalues well. This experiment justifies the use of the m-dimensional Krylov subspace for the m eigenvalues and eigenvectors in ASEM.

Experiment 4. Comparison of ASEM with the Krylov subspace method [3, 2] and the gradient descent method [1] on synthetic problems. For large-scale problems, the Krylov subspace method and the gradient descent method are two state-of-the-art methods for CRS (1). In this experiment, we compare the proposed ASEM against both. The dominant computational step of all three methods is the Hessian-vector product ($O(mn^2)$ in total). The vector b is proportional to $[1,\cdots,1]^T$ with length $\|b\| = 0.1$. The eigenvalues of A are evenly spaced in $[-1, 1]$. Similar to the setting in [2], we define the condition number of (1) as
$$\kappa = \frac{\lambda_n + \rho\|x^*\|}{\lambda_1 + \rho\|x^*\|} = \frac{\lambda_n + \sigma^*}{\lambda_1 + \sigma^*}.$$
Then we have
$$\sigma^* = \frac{\lambda_n - \kappa\lambda_1}{\kappa - 1} \quad \text{and} \quad \rho = \frac{\sigma^*}{\|(A + \sigma^* I)^{-1} b\|}.$$
Case 1: an easy case with $\kappa = 10^3$; Case 2: a harder case with $\kappa = 10^6$. The remaining parameter is $n = 5 \times 10^3$. We adopt the first-order ASEM (i.e., $\mu$ is defined in (7)). As shown in Figure 4, ASEM outperforms both the gradient descent method and the Krylov subspace method when m is relatively large. It is reasonable that ASEM underperforms when m is small, since a low-dimensional Krylov subspace cannot approximate the eigenvalues and eigenvectors of A well. The results further demonstrate the performance of the proposed ASEM method.
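The conditioning recipe of Experiment 4 translates directly into code; the sketch below (an editor's reading of the construction, for a diagonal A as in the synthetic experiments, with a hypothetical helper name) picks $\sigma^*$ from a target $\kappa$ and then backs out the $\rho$ that realizes it.

```python
import numpy as np

def crs_instance_with_condition_number(lam, b, kappa):
    """Given an ascending spectrum `lam`, a vector b, and a target condition
    number kappa, return (sigma*, rho) with sigma* = (lam_n - kappa*lam_1)/(kappa - 1)
    and rho = sigma* / ||(A + sigma* I)^{-1} b|| for diagonal A."""
    sigma_star = (lam[-1] - kappa * lam[0]) / (kappa - 1.0)
    x_norm = np.linalg.norm(b / (lam + sigma_star))
    return sigma_star, sigma_star / x_norm

lam = np.linspace(-1.0, 1.0, 5000)
b = np.full(5000, 0.1 / np.sqrt(5000))   # ||b|| = 0.1
sigma_star, rho = crs_instance_with_condition_number(lam, b, kappa=1e3)
```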
Experiment 5. Comparison of ASEM with the Cauchy point method [5], the gradient descent method, and the Krylov subspace method on the CUTEst problems [7]. The CUTEst library collects various unconstrained and constrained optimization problems that arise in real applications. In this part, we compare the numerical performance of the ARC algorithm [3] on four unconstrained optimization problems from the CUTEst library, where the subproblems are solved by the Cauchy point method (ARC-CP), the gradient descent method (ARC-GD), the Krylov subspace method (ARC-Krylov(k), where k is the number of Lanczos basis vectors), and the ASEM method (ARC-ASEM(m), where m is the number of eigenvalues used by ASEM). The architecture of the ARC algorithm is provided in Appendix E.2. The four unconstrained optimization problems from the CUTEst library (TOINTGSS, BRYBAND, DIXMAANG, and TQUARTIC) have dimensions 1000, 2000, 3000, and 5000, respectively. We use the first-order ASEM ($\mu$ defined in (7)) here, since we found that the performance of the first-order and second-order ASEM does not differ much on these problems. Numerical results are reported in Table 1, where $x_{\mathrm{out}}$, $\|\nabla f(x_{\mathrm{out}})\|$, $\lambda_1(\nabla^2 f(x_{\mathrm{out}}))$, iter, and time denote the output of the ARC algorithm, the suboptimality (gradient norm), the minimal eigenvalue of the Hessian at the output, the number of ARC iterations, and the CPU time, respectively. Several observations follow. Firstly, the proposed ASEM outperforms the other methods in most cases and is otherwise comparable to the Krylov subspace method, achieving a slightly worse suboptimality (gradient norm) or CPU time. Furthermore, only one eigenvalue is enough for ASEM to perform well (i.e., ARC-ASEM(m) with m = 1), which is surprising. For more experimental details, please refer to Appendix E.

6 Conclusion

We develop the first-order and second-order truncated secular equations as surrogates for the secular equation with full eigendecomposition in solving the CRS (1). The proposed ASEM is an efficient alternative to existing methods for solving CRS, since it reduces the computational cost from $O(n^3)$ to $O(mn^2)$. Our CRS solvers feature rigorous theoretical error bounds, which are related to the amount of eigen information used. We also discuss in detail the implementation of the proposed algorithm ASEM; in particular, we show that only Hessian-vector products are needed, and no matrix inversion. Numerical experiments are conducted to further investigate the properties and performance of the proposed ASEM and to corroborate the theoretical results. From our experiments, we find that the proposed ASEM is more efficient than the state-of-the-art methods on synthetic and CUTEst problems.

Acknowledgement

We would like to thank the anonymous reviewers and chairs for their helpful comments. Michael K. Ng is supported by Hong Kong Research Grant Council GRF 12300218, 12300519, 17201020, 17300021, C1013-21GF, C7004-21GF and Joint NSFC-RGC N-HKU76921. Man-Chung Yue is supported by the Research Grants Council (RGC) of Hong Kong under the GRF project 15305321.
1. What is the focus of the paper regarding cubic regularization in Newton's method?
2. What are the strengths of the proposed approach, particularly in solving the cubic subproblem efficiently?
3. What are the weaknesses of the paper, especially in terms of the error analysis and tradeoff between approximation and accuracy?
4. Do you have any questions regarding the practical implementation of the proposed method in cubic regularization?
5. Are there any concerns about the potential negative societal impact of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The well-known cubic regularization of Newton's method proposed by Nesterov (2006) requires solving a "cubic subproblem" at each iteration:
$$\min_{x \in \mathbb{R}^n} \; b^T x + \frac{1}{2} x^T A x + \frac{\rho}{3}\|x\|^3.$$
The cubic subproblem is equivalent to solving a "secular equation", which is a nonlinear equation that depends explicitly on the eigenvalues of $A \in \mathbb{R}^{n \times n}$. Solvers for this secular equation run in $O(n^3)$. In this work the authors propose two "approximate" secular equations. Essentially, the approximate secular equations are obtained by setting a parameter m and replacing all eigenvalues larger than $\lambda_m(A)$ in the secular equation with a constant $\mu \ge \lambda_m(A)$. The advantage is that now we only need to compute the first m eigenvalues of A, reducing the computational cost to $O(mn^2)$. The authors provide an error analysis of the difference between the exact solution of the secular equation and the approximate solution. Furthermore, they provide numerical experiments to gauge the error of the approximate secular equation in a number of different scenarios. It is observed that the error depends on the distribution of the eigenvalues of A.

Strengths And Weaknesses
Strengths: The ability to solve cubic subproblems efficiently is very important for the success of cubic regularization. The idea of solving an approximate cubic subproblem instead of an exact one can be useful in specific cases where the large eigenvalues of A are concentrated in a small interval. The error bounds in this paper also become tighter in this regime.

Weaknesses: The main contribution of this paper is proposing two approximate secular equations and analyzing their errors. However, both the proposed equations and the error analysis are quite obvious and unsurprising. It basically boils down to the following idea: in the equation
$$w(\sigma) = \sum_{i=1}^{n} \frac{c_i^2}{(\lambda_i + \sigma)^2} - \frac{\sigma^2}{\rho^2} = 0,$$
we can pick an index m and replace all eigenvalues greater than or equal to $\lambda_m(A)$ with a constant $\mu$, resulting in an approximate equation. The larger m is, the smaller the approximation error. When m = n, the method is exact. The error analysis is also based on this idea. The overall theoretical contribution is weak because the tradeoff between m and the accuracy is quite obvious. It is also clear that the resulting error will depend on m, $\mu$, and the distribution of the eigenvalues. Using a small m, of course, will increase the error and reduce the running time, which is the gist of this paper. The motivation for this work is that the approximate secular equations can be used to solve the cubic subproblem, resulting in more efficient iterates for cubic regularization of Newton's method. However, the experimental section does not contain such an experiment, where an actual optimization problem is solved with cubic regularization and the running time is compared with other methods for solving the cubic subproblem, either exactly or approximately. In the end, there is neither a theoretical proof nor a convincing experimental section that demonstrates the advantage of the proposed method when it is actually used in cubic regularization.

Questions
The paper is clear and easy to understand. My only question is how the approximate solver works in practice when used in cubic regularization. If the authors can implement their method, compare it to state-of-the-art algorithms such as Carmon and Duchi 2018 or Cartis et al. 2011, and demonstrate a clear advantage, I would consider increasing my overall evaluation.

Limitations
Yes.
There is no direct negative societal impact.
NIPS
Title
A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes

Abstract
Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally.

1 Introduction

In recent years, hardware systems employing GPUs and TPUs have enabled neural network training programs to process dramatically more data in parallel than ever before. The most popular way to exploit these systems is to increase the batch size in the optimization algorithm (i.e. the number of training examples processed per training step). On many workloads, modern systems can scale to larger batch sizes without significantly increasing the time per step [Jouppi et al., 2017, Wang et al., 2019], thus proportionally increasing the number of training examples processed per second. If researchers can use this increased throughput to reduce the time required to train each neural network, then they should achieve better results by training larger models, using larger datasets, and by exploring new ideas more rapidly.

As the capacity for data parallelism continues to increase, practitioners can take their existing, well-tuned training configurations and re-train with larger batch sizes, hoping to achieve the same performance in less training time [e.g. Ying et al., 2018]. On an idealized data-parallel system with negligible overhead from increasing the batch size, they might hope to achieve perfect scaling, a proportional reduction in training time as the batch size increases.

However, achieving perfect scaling is not always straightforward. Changing the batch size changes the training dynamics, requiring the training hyperparameters (e.g.
learning rate) to be carefully28 re-tuned in order to maintain the same level of validation performance.1 In addition, smaller batch29 sizes provide implicit regularization from gradient noise that may need to be replaced by other forms30 of regularization when the batch size is increased. Finally, even with perfect tuning, increasing31 1 Although there are heuristics for adjusting the learning rate as the batch size changes, these heuristics inevitably break down sufficiently far from the initial batch size and it is also not clear how to apply them to other training hyperparameters (e.g. momentum). Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. the batch size eventually produces diminishing returns. After a critical batch size, the number of32 training steps cannot be decreased in proportion to the batch size – the number of epochs must33 increase to match the validation performance of the smaller batch size. See Shallue et al. 2019 for a34 survey of the effects of data parallelism on neural network training. Once these effects are taken into35 account, there is no strong evidence that increasing the batch size degrades the maximum achievable36 performance on any workload. At the same time, the ever-increasing capacity for data parallelism37 presents opportunities for new regularization techniques that can replace the gradient noise of smaller38 batch sizes and new optimization algorithms that can extend perfect scaling to larger batch sizes by39 using more sophisticated gradient information [Zhang et al., 2019].40 You et al. [2017] proposed the LARS optimization algorithm in the hope of speeding up neural41 network training by exploiting larger batch sizes. LARS is a variant of stochastic gradient descent42 (SGD) with momentum [Polyak, 1964] that applies layer-wise normalization before applying each43 gradient update. Although it is difficult to draw strong conclusions from the results presented in the44 LARS paper, 2 the MLPerf3 Training benchmark4 adopted LARS as one of two allowed algorithms45 in the closed division for ResNet-50 on ImageNet and it became the de facto standard algorithm for46 that benchmark task. With MLPerf entrants competing to find the fastest-training hyperparameters47 for LARS, the first place submissions in the two most recent MLPerf Training competitions used48 LARS to achieve record training speeds with batch sizes of 32,678 and 65,536, respectively. No49 publications or competitive submissions to MLPerf have attempted to match these results with a50 standard optimizer (e.g. Momentum or Adam). However, MLPerf entrants do not have a strong51 incentive (nor are necessarily permitted by the rules) to explore other algorithms because MLPerf52 Training is a systems benchmark that requires algorithmic equivalence between submissions to make53 fair comparisons. Moreover, since the main justification for LARS is its excellent performance on54 ResNet-50 at large batch sizes, more work is needed to quantify any benefit of LARS over standard55 algorithms at any batch size.56 You et al. [2019] later proposed the LAMB optimizer to speed up pre-training for BERT [Devlin57 et al., 2018] using larger batch sizes after concluding that LARS was not effective across workloads.58 LAMB is a variant of Adam [Kingma and Ba, 2014] that adds a similar layer-wise normalization step59 to LARS. You et al. 
[2019] used LAMB for BERT pre-training with batch sizes up to 65,536 and60 claimed that Adam cannot match the performance of LAMB beyond batch size 16,384.61 In this paper, we demonstrate that standard optimizers, without any layer-wise normalization tech-62 niques, can match or improve upon the large batch size results used to justify LARS and LAMB. In63 Section 2, we show that Nesterov momentum [Nesterov, 1983] matches the performance of LARS on64 the ResNet-50 benchmark with batch size 32,768. We are the first to match this result with a standard65 optimizer. In Section 3, contradicting the claims in You et al. [2019], we show that Adam obtains66 better BERT pre-training results than LAMB at the largest batch sizes, resulting in better downstream67 performance metrics after fine-tuning.68 In addition, we establish a new state-of-the-art for BERT pretraining speed, reaching an F1 score of69 90.46 in 7,818 steps using Adam at batch size 65,536 (we report training speed in steps because our70 focus is algorithmic efficiency, but since we compare LARS and LAMB to simpler optimizers, fewer71 training steps corresponds to faster wall-time in an optimized implementation – our BERT result72 with Adam also improves upon the wall-time record of LAMB reported in You et al. 2019). Taken73 together, our results establish stronger training speed baselines for these tasks and batch sizes, which74 we hope will assist future work aiming to accelerate training using larger batch sizes.75 In addition to the contributions mentioned above, we demonstrate several key effects that are often76 overlooked by studies aiming to establish the superiority of new optimization algorithms. We show77 that future work must carefully disentangle regularization and optimization effects when comparing a78 new optimizer to baselines. We also report several under-documented details used to generate the79 best LARS and LAMB results, a reminder that future comparisons should document any novel tricks80 and include them in baselines. Finally, our results add to existing evidence in the literature on the81 difficulty of performing independently rigorous hyperparameter tuning for optimizers and baselines.82 2 The modified AlexNet on ImageNet benchmark did not have well-established accuracy targets from prior work and LARS used a more general learning rate schedule than the momentum baseline. For ResNet-50 on ImageNet, LARS achieved sub-par accuracy numbers and was not compared to any other optimizer at the same batch size, leaving open the possibility that a generic optimizer would scale just as well as LARS. 3 MLPerf is a trademark of MLCommons.org. 4 https://mlperf.org/training-overview In particular, we show that the optimal shape of the learning rate schedule is optimizer-dependent (in83 addition to the scale), and that differences in the schedule can dominate optimizer comparisons at84 smaller step budgets and become less important at larger step budgets.85 1.1 Related work86 Shallue et al. [2019] and Zhang et al. [2019] explored the effects of data parallelism on neural network87 training for different optimizers, finding no evidence that larger batch sizes degrade performance88 and demonstrating that different optimizers can achieve perfect scaling up to different critical batch89 sizes. You et al. [2017, 2019] developed the LARS and LAMB optimizers in the hope of speeding up90 training by achieving perfect scaling beyond standard optimizers. 
Many other recent papers have91 proposed new optimization algorithms for generic batch sizes or larger batch sizes [see Schmidt92 et al., 2020]. Choi et al. [2019] and Schmidt et al. [2020] demonstrated the difficulties with fairly93 comparing optimizers, showing that the hyperparameter tuning protocol is a key determinant of94 optimizer rankings. The MLPerf Training benchmark [Mattson et al., 2019] provides a competitive95 ranking of neural network training systems, but does not shed much light on the relative performance96 of optimizers because entrants are limited in the algorithms they can use and the hyperparameters97 they can tune.98 2 Matching LARS on ImageNet99 The MLPerf training benchmark for ResNet-50 v1.5 on ImageNet [Mattson et al., 2019] aims to100 reach 75.9% validation accuracy in the shortest possible wall-clock time. In the closed division of101 the competition, entrants must choose between two optimizers, SGD with momentum or LARS, and102 are only allowed to tune a specified subset of the optimization hyperparameters, with the remaining103 hyperparameter values set by the competition rules.5 The winning entries in the two most recent104 competitions used LARS with batch size 32,768 for 72 training epochs6 and LARS with batch size105 65,536 for 88 training epochs,7 respectively. Kumar et al. [2019] later improved the training time106 for batch size 32,768 by reaching the target accuracy in 64 epochs. These are currently the fastest107 published results on the ResNet-50 benchmark. However, it has been unclear whether LARS was108 necessary to achieve these training speeds since no recent published results or competitive MLPerf109 submissions have used another optimizer. In this section, we describe how we matched the 64 epoch,110 32,768 batch size result of LARS using standard Nesterov momentum.8111 A fair benchmark of training algorithms or hardware systems must account for stochasticity in112 individual training runs. In the MLPerf competition, the benchmark metric is the mean wall-clock113 time of 5 trials after the fastest and slowest trials are excluded. Only 4 out of the 5 trials need to reach114 the target accuracy and there is no explicit limit on the number of times an entrant can try a different115 set of 5 trials. Since our goal is to compare algorithms, rather than systems, we aim to match the116 LARS result in terms of training steps instead (but since Nesterov momentum is computationally117 simpler than LARS, this would also correspond to faster wall-clock time on an optimized system).118 Specifically, we measure the median validation accuracy over 50 training runs with a fixed budget of119 2,512 training steps9 at a batch size of 32,768. When we ran the published LARS training pipeline,10120 LARS achieved a median accuracy of 75.97% and reached the target in 35 out of 50 trials. We121 consider the LARS result to be matched by another optimizer if the median over 50 trials exceeds the122 target of 75.9%.123 2.1 Nesterov momentum at batch size 32k124 This section describes how we used the standard Nesterov momentum optimizer to train the ResNet-125 50 v1.5 on ImageNet to 75.9% validation accuracy in 2,512 update steps at a batch size of 32,768,126 matching the best published LARS result at this batch size. Although we implemented our own127 training program, the only logical changes we made to the published LARS pipeline were to the128 optimizer and the optimization hyperparameters. 
Our model implementation and data pre-processing129 pipeline were identical to those required under the MLPerf closed division rules (see Appendix B).130 5 https://git.io/JtknD 6 https://mlperf.org/training-results-0-6 7 https://mlperf.org/training-results-0-7 8 The 88 epoch, 65,536 batch size result is faster in terms of wall-clock time but requires more training epochs, indicating that it is beyond LARS’s perfect scaling regime. Although LARS obtains diminishing returns when increasing the batch size from 32,768 to 65,536, future work could investigate whether Nesterov momentum drops off more or less rapidly than LARS. 9 Corresponding to 64 training epochs in Kumar et al. [2019]. 10 https://git.io/JtsLQ We present two Nesterov momentum hyperparameter configurations that achieve comparable per-131 formance to LARS. Configuration A achieved a median accuracy of 75.97% (the same as LARS)132 and reached the target accuracy in 34 out of 50 trials. Configuration B is a modified version of133 Configuration A designed to make as few changes as possible to the LARS hyperparameters; it134 achieved a median accuracy of 75.92% and reached the target in 29 out of 50 trials. See Appendix D.1135 for the complete hyperparameter configurations.136 To achieve these results, we tuned the hyperparameters of the training pipeline from scratch using137 Nesterov momentum. We ran a series of experiments, each of which searched over a hand-designed138 hyperparameter search space using quasi-random search [Bousquet et al., 2017]. Between each139 experiment, we modified the previous search space and/or tweaked the training program to include140 optimization tricks and non-default hyperparameter values we discovered in the state-of-the-art LARS141 pipeline. The full sequence of experiments we ran, including the number of trials, hyperparameters142 tuned, and search space ranges, are provided in Appendix D.4. Once we had matched the LARS143 result with Configuration A, we tried setting each hyperparameter to its value in the LARS pipeline in144 order to find the minimal set of changes that still achieved the target result, producing Configuration145 B. The remainder of this section describes the hyperparameters we tuned and the techniques we146 applied on the journey to these results.147 2.1.1 Nesterov Momentum Optimizer148 Nesterov momentum is a variant of classical or “heavy-ball” momentum defined by the update rule149 vt+1 = µvt +∇`(θt), θt+1 = θt − ηt (µvt+1 +∇`(θt)) , where v0 = 0, θt is the vector of model parameters after t steps, ∇`(θt) is the gradient of the loss150 function `(θ) averaged over a batch of training examples, µ is the momentum, and ηt is the learning151 rate for step t. We prefer Nesterov momentum over classical momentum because it tolerates larger152 values of its momentum parameter [Sutskever et al., 2013] and sometimes outperforms classical153 momentum, although the two algorithms perform similarly on many tasks [Shallue et al., 2019, Choi154 et al., 2019]. We tuned the Nesterov momentum µ in Configurations A and B. 
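As a quick reference for the update rule written above, here is a minimal sketch (an editor's illustration, not the benchmark training code) of a single Nesterov momentum step; the toy quadratic, learning rate, and momentum value are arbitrary example choices.

```python
import numpy as np

def nesterov_momentum_step(theta, v, grad, lr, mu=0.9):
    """One step of Nesterov momentum as written above:
    v_{t+1} = mu * v_t + g_t,  theta_{t+1} = theta_t - lr * (mu * v_{t+1} + g_t)."""
    v_next = mu * v + grad
    theta_next = theta - lr * (mu * v_next + grad)
    return theta_next, v_next

# Toy usage on the quadratic loss 0.5 * ||theta||^2 (gradient = theta).
theta, v = np.ones(3), np.zeros(3)
for _ in range(100):
    theta, v = nesterov_momentum_step(theta, v, grad=theta, lr=0.1)
print(np.linalg.norm(theta))  # should be close to 0
```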
We discuss the learning155 rate schedule {ηt} separately in Section 2.1.4.156 2.1.2 Batch normalization157 The ResNet-50 v1.5 model uses batch normalization [Ioffe and Szegedy, 2015], defined as158 BN(x(l)) = ( x(l) − mean(x(l))√ var(x(l)) + ) × γ(l) + β(l), where x(l) is a vector of pre-normalization outputs from layer l, mean(·) and var(·) denote the159 element-wise sample mean and variance across the batch of training examples,11 and γ(l) and β(l)160 are trainable model parameters.161 Batch normalization introduces the following tuneable hyperparameters: , the small constant added162 to the sample variance; the initial values of γ(l) and β(l); and ρ, which governs the exponential163 moving averages of the scaling factors used in evaluation. The LARS pipeline uses = 10−5 and164 ρ = 0.9. It sets the initial value of β(l) to 0.0 everywhere, but the initial value of γ(l) depends on165 the layer: it sets γ(l) to 0.0 in the final batch normalization layer of each residual block, and to 1.0166 everywhere else. In Configuration A, we tuned , ρ, and γ0, the initial value of γ(l) in the final batch167 normalization layer of each residual block. In Configuration B, we used the same values as LARS for168 and ρ, but we found that choosing γ0 between 0.0 and 1.0 was important for matching the LARS169 result with Nesterov momentum.170 2.1.3 Regularization171 In Configuration A, we tuned both the L2 regularization coefficient λ and label smoothing172 coefficient τ [Szegedy et al., 2016]. The LARS pipeline uses λ = 10−4 and τ = 0.1.173 11 In a distributed training environment the mean and variance are commonly computed over a subset of the full batch. The LARS pipeline uses a “virtual batch size” of 64, which we also use to avoid changing the training objective [Hoffer et al., 2017]. Crucially, the LARS pipeline does not apply L2 regularization to the bias variables of the174 ResNet model nor the batch normalization parameters γ(l) and β(l) (indeed, the published175 LARS pipeline does not even apply LARS to these parameters – it uses Heavy-ball momen-176 tum). This detail is extremely important for both LARS and Nesterov momentum to achieve177 the fastest training speed. Configuration B used the same λ and τ as Configuration A.178 179 2.1.4 Learning rate schedule180 The LARS pipeline uses a piecewise polynomial schedule181 ηt = ηinit + (ηpeak − ηinit) ( t twarmup )pwarmup , t ≤ twarmup ηfinal + (ηpeak − ηfinal) ( T−t T−twarmup )pdecay t > twarmup, with ηinit = 0.0, ηpeak = 29.0, ηfinal = 10−4, pwarmup = 1,182 pdecay = 2, and twarmup = 706 steps. In Configuration A, we re-183 tuned all of these hyperparameters with Nesterov momentum.184 In Configuration B, we set ηinit, pdecay, and twarmup to the same185 values as LARS, changing only pwarmup from 1 to 2 and re-186 scaling ηpeak and ηfinal.187 2.1.5 Comparing Nesterov momentum and LARS188 Table 1 shows the hyperparameter values for Configuration B that differ from the state-189 of-the-art LARS pipeline. Aside from re-tuning the momentum, learning rate scale, and190 regularization hyperparameters (whose optimal values are all expected to change with the191 optimizer), the only changes are setting pwarmup to 2 instead of 1 and re-tuning γ0.192 193 Figure 1 shows the LARS learning rate schedule com-194 pared to the Nesterov momentum schedule. Even though195 these schedules are similar, we found that each optimizer196 had a different optimal value of the warmup polynomial197 power. 
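Because the schedule definition above is hard to read in extracted form, here is a small sketch (an editor's restatement, not the benchmark code) of the piecewise polynomial warmup/decay schedule using the quantities named in the text; the total step count T = 2512 is an assumption taken from the stated step budget.

```python
def piecewise_polynomial_lr(t, T, t_warmup, eta_init, eta_peak, eta_final,
                            p_warmup=1.0, p_decay=2.0):
    """Polynomial warmup from eta_init to eta_peak over t_warmup steps,
    then polynomial decay from eta_peak to eta_final over the remaining steps."""
    if t <= t_warmup:
        return eta_init + (eta_peak - eta_init) * (t / t_warmup) ** p_warmup
    return eta_final + (eta_peak - eta_final) * ((T - t) / (T - t_warmup)) ** p_decay

# Example with the LARS-style constants quoted in the text.
schedule = [piecewise_polynomial_lr(t, T=2512, t_warmup=706, eta_init=0.0,
                                    eta_peak=29.0, eta_final=1e-4)
            for t in range(2512)]
```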
As Table 2 shows, Nesterov momentum performs198 better with pwarmup = 2 instead of 1, while the opposite199 is true with LARS. As discussed in Agarwal et al. [2020],200 optimizers can induce implicit step size schedules that201 strongly influence their training dynamics and solution202 quality, and it appears from Table 2 that the implicit step203 sizes of Nesterov momentum and LARS may evolve dif-204 ferently, causing the shapes of their optimal learning rate205 schedules to differ.206 Although the main concern of a practitioner is validation performance, the primary task of an207 optimization algorithm is to minimize training loss. Table 2 shows that Nesterov momentum achieves208 higher training accuracy than LARS, despite similar validation performance. Thus, it may be more209 appropriate to consider the layerwise normalization of LARS to be a regularization technique, rather210 than an optimization technique.211 Spending even more effort tuning LARS or Nesterov momentum would likely further improve the212 current state-of-the-art for that optimizer. Meaningful optimizer comparisons are only possible213 with independent and equally intensive tuning efforts, and we do not claim that either optimizer214 outperforms the other on this benchmark. That said, if the main evidence for LARS’s utility as a215 “large-batch optimizer” is its performance on this particular benchmark, then more evidence is needed216 to quantify any benefit it has over traditional, generic optimizers like Nesterov momentum.217 2.2 Lessons learned218 In hindsight, it was only necessary to make a few changes to the LARS pipeline to match its219 performance at batch size 32,768 with Nesterov momentum. However, Table 1 does not accurately220 represent the effort required when attempting to match a highly tuned training-speed benchmark.221 Firstly, as described in Sections 2.1.2 and 2.1.3, the strong results of LARS depend partly on a few222 subtle optimization tricks and non-default values of uncommonly-tuned hyperparameters. Fortunately,223 in this case we could discover these tricks by examining the open-source code required for MLPerf224 submissions, but machine learning research papers do not always report these important details.225 Researchers can easily waste a lot of experiments and produce misleading results before getting all of226 these details right. We demonstrate the importance of adding these tricks to our Nesterov momentum227 pipeline in Appendix C; without these tricks (or some new tricks), we likely would not have been228 able to match the LARS performance.229 Secondly, the learning rate schedule really matters when trying to maximize performance with a230 relatively small step budget. Both LARS and Nesterov momentum are sensitive to small deviations231 from the optimized learning rate schedules in Figure 1, and neither schedule works as well for the232 other optimizer. Although relatively minor changes were sufficient to match LARS with Nesterov233 momentum, there is no way to know a priori how the optimal schedule will look for a new optimizer234 Wu et al. [2018]. Even in toy settings where the optimal learning rate schedule can be derived, it235 does not fit into commonly used schedule families and depends strongly on the optimizer Zhang236 et al. [2019]. Indeed, this problem applies to the other optimization hyperparameters as well: it237 is extremely difficult to know which are worth considering ahead of time. 
Finally, even when we238 narrowed down our hyperparemeter search spaces around the optimal point, the volume of our search239 spaces corresponding to near-peak performance was small, likely due to the small step budget [Shallue240 et al., 2019]. We investigate how these effects change with a less stringent step budget in Section 4.241 3 Stronger BERT pretraining speed baselines242 You et al. [2019] developed the LAMB optimizer in the hope of speeding up training for BERT-Large243 [Bidirectional Encoder Representations from Transformers, Devlin et al., 2018]. BERT training244 consists of two phases. The “pretraining” phase has two objectives: (1) predicting masked tokens245 based on the rest of the sequence (a masked language model), and (2) predicting whether two246 given sentences follow one from another. Finally, the “fine-tuning” phase refines the model for a247 downstream task of interest. BERT pretraining takes a considerable amount of time (up to 3 days on248 16 Cloud TPU-v3 chips Jouppi et al. [2017]), whereas the fine-tuning phase is typically much faster.249 Model quality is typically assessed on the downstream metrics, not on pretraining loss, making BERT250 training a somewhat awkward benchmark for optimization research.251 You et al. [2019] used LAMB for BERT pretraining with batch sizes up to 65,536 and claimed that252 LAMB outperforms Adam batch size 16,384 and beyond. The LAMB optimizer has since appeared253 in several NLP toolkits, including as Microsoft DeepSpeed and NVIDIA Multi-node BERT training,254 and as a benchmark task in MLPerf v0.7.12255 As shown in Table 3, we trained Adam (with decoupled weight decay) baselines that achieve better256 results than both the LAMB and Adam results reported in You et al. [2019]. Our new Adam257 baselines obtain better F1 scores on the development set of the SQuaD v1.1 task in the same number258 of training steps as LAMB for both batch size 32,768 and the hybrid 65,536-then-32,768 batch259 size training regime in You et al. [2019]. We also ran Adam at batch size 65,536 to reach nearly260 the same F1 score as the hybrid batch size LAMB result, but in much fewer training steps. We261 believe 7,818 steps is a new state-of-the-art for BERT pretraining speed [in our experiments, it262 also improves upon the 76-minute record claimed in You et al., 2019]. Additionally, at batch263 size 32,768 our Adam baseline got a better pretraining loss of 1.277 compared to LAMB’s 1.342.264 12 We do not consider the MLPerf task in this paper since it is a warm-start, partial training task. 265 We used the same experimental setup as You266 et al. [2019], including two pretraining phases267 with max sequence lengths of 128 and then 512.268 In order to match You et al. [2019], we reported269 the F1 score on the downstream SQuaD v1.1270 task as the target metric, although this metric271 introduces potential confounds: optimization272 efficiency should be measured on the training273 task using training and held-out data sets. Fortunately, in this case better pretraining performance274 correlated a with higher F1 score after fine-tuning. See Appendix B.2 for additional experiment275 details. We tuned Adam hyperparameters independently for each pretraining phase, specifically276 learning rate η, β1, β2, the polynomial power for the learning rate warmup pwarmup, and weight277 decay λ, using quasi-random search [Bousquet et al., 2017]. 
See Appendix D.2 for the search spaces.278 In addition to hyperparmeter tuning, our improved Adam results at these batch sizes are also likely279 due to two implementation differences. First, the Adam implementation in You et al. [2019] comes280 from the BERT open source code base, in which Adam is missing the standard bias correction.13281 The Adam bias correction acts as an additional step size warm-up, thereby potentially improving the282 stability in the initial steps of training. Second, the BERT learning rate schedule had a discontinuity283 at the start of the decay phase due to the learning rate decay being incorrectly applied during warm-up284 14 (see Figure 2 in Appendix B). This peculiarity is part of the official BERT release and is present in285 3000+ copies of the BERT Training code on GitHub.286 4 Investigating a less stringent step budget287 Part of what makes comparing optimizers so difficult is that the hyperparameter tuning tends to288 dominate the comparisons [Choi et al., 2019]. Moreover, tuning becomes especially difficult when289 we demand a fixed epoch budget even when dramatically increasing the batch size [Shallue et al.,290 2019]. Fixing the epoch budget as the batch size increases is equivalent to demanding perfect scaling291 (i.e. that the number of training steps decreases by the same factor that the batch size is increased).292 We can view the role of hyperparameter tuning for large batch training as resisting the inevitable end293 of perfect scaling. For example, it might be possible to extend perfect scaling using delicately tuned294 learning rate schedules, but comparing optimizers under these conditions can make the learning rate295 schedule dominate the comparison by favoring some algorithms over others. Therefore, in order to296 better understand the behavior of LARS and LAMB compared to Nesterov Momentum and Adam, we297 ran additional ResNet-50 experiments with a more generous 6,000 step budget (vs 2,512 in Section 2)298 and a more simplistic cosine learning rate schedule. At batch size 32,768, this budget should let us299 reach better validation accuracy than the MLPerf target of 75.9%.300 Although not mentioned in You et al. [2017], the state-of-the-art MLPerf pipeline for “LARS” actually301 uses both LARS and Heavy-ball Momentum, with Momentum applied to the batch normalization and302 ResNet bias parameters and LARS applied to the other parameters. You et al. [2019] does not mention303 whether LAMB was only applied to some parameters and not others. If layerwise normalization can304 be harmful for some model parameters, this is critical information for practitioners using LARS or305 LAMB, since it might not be obvious which optimizer to apply to which parameters. To investigate306 this, we trained both pure LARS and LAMB configurations, as well as configurations that did not307 apply layerwise normalization to the batch normalization and ResNet bias parameters. Moreover,308 LAMB’s underlying Adam implementation defaults to = 10−6, rather than the typical 10−7 or309 10−8. In some cases, can be a critical hyperparameter for Adam [Choi et al., 2019], so we included310 Adam configurations with both = 10−6 and = 10−8.311 Table 4 shows the validation accuracy of these different configurations after training for 6,000312 steps with batch size 32,768. In every case, we used a simple cosine decay learning rate sched-313 ule and tuned the initial learning rate and weight decay using quasi-random search. 
We used314 momentum parameters of 0.98 for Nesterov momentum and 0.929 for LARS, respectively, based315 on the tuned values from Section 2. We used default hyperparameters for Adam and LAMB316 except where specified. We set all other hyperparameters to the same values as the state-of-the-317 art LARS pipeline, except we set γ0 = 1.0. See Appendix D.3 for more details. As expected,318 13 https://git.io/JtY8d 14 See https://git.io/JtnQW and https://git.io/JtnQ8. highly tuned learning rate schedules and optimizer hyperparameters are no longer necessary with319 a less stringent step budget. Multiple optimizer configurations in Table 4 exceed the MLPerf320 target accuracy of 75.9% at batch size 32,768 with minimal tuning. Training with larger batch321 sizes is not fundamentally unstable: stringent step budgets make hyperparameter tuning trickier.322 LAMB, are introduced alongside claims that337 the new optimizer does not require any—or at338 least minimal—tuning. Unfortunately, these339 claims require a lot of work to support, since340 they require trying the optimizer on new prob-341 lems without using those problems during the342 development of the algorithm. Although our ex-343 periments here are not sufficient to determine344 which optimizers are easiest to tune, experiments like these that operate outside the regime of highly345 tuned learning rate schedules can serve as a starting point. In this experiment, LARS and LAMB do346 not appear to have an advantage in how easy they are to tune even on a dataset and model that were347 used in the development of both of those algorithms. LAMB is a variant of Adam and performs about348 the same as Adam with the same value of ; LARS is more analogous to Momentum and indeed349 Nesterov momentum and LARS have similar performance.350 5 Discussion351 Our results show that standard, generic optimizers suffice for achieving strong results across batch352 sizes. Therefore, any research program to create new optimizers for training at larger batch sizes353 must start from the fact that Momentum, Adam, and likely other standard methods work fine at batch354 sizes as large as those considered in this paper. The LARS and LAMB update rules have no more355 to do with the batch size (or “large” batches) than the Momentum or Adam update rules. Although356 You et al. [2019] presented convergence rate bounds for LARS and LAMB to support their claims357 of superior performance, we show in Appendix A that Adam satisfies a similar bound to LAMB.358 These bounds all rely on very unrealistic assumptions.15 Most of all, they are loose upper bounds359 on the worst case behavior of the algorithms, not accurate reflections of optimizer performance in360 reality. Whether layer-wise normalization can be useful for optimization or regularization remains an361 open question. However, if LARS and LAMB have any advantage over standard techniques, it is not362 that they work dramatically better on the tasks and batch sizes in You et al. [2017, 2019]. This is363 not to suggest that there is nothing interesting about studying neural network optimization at larger364 batch sizes. For example, as gradient noise decreases, there may be opportunities to harness curvature365 information and extend the region of perfect scaling [Zhang et al., 2019]. 
However, there is currently366 no evidence that LARS and LAMB scale better than Momentum and Adam.367 Our primary concern in this paper has been matching the state of the art—and establishing new368 baselines—for training speed measurements of the sort used to justify new techniques and algorithms369 for training with larger batch sizes. In contrast, many practitioners are more concerned with obtaining370 the best possible validation error with a somewhat flexible training time budget. Part of the reason371 why matching LARS at batch size 32,768 was non-trivial is because getting state of the art training372 15 All convergence bounds assume no momentum is used, and the Lavg bound for LAMB also assumes β2 = 0, when it is typically 0.999. Additionally, Lavg could still be large if L∞ is large, but we leave an empirical analysis of this to future work. speed requires several tricks and implementation details that are not often discussed. It was not373 obvious to us a priori which ones would prove crucial. These details do not involve changes to the374 optimizer, but they interact with the optimizer in a regime where all hyperparameters need to be well375 tuned to stay competitive, making it necessary to re-tune everything for a new optimizer.376 In neural network optimization research, training loss is rarely discussed in detail and evaluation377 centers on validation/test performance since that is what practitioners care most about. However,378 although we shouldn’t only consider training loss, it is counter-intuitive and counter-productive to379 elide a careful investigation of the actual objective of the optimizer. If a new optimizer achieves better380 test performance, but shows no speedup on training loss, then perhaps it is not a better optimizer so381 much as an indirect regularizer. 16 Indeed, in our experiments we found that Nesterov momentum382 achieves noticeably better training accuracy on ResNet-50 than the LARS configuration we used,383 despite reaching roughly the same validation accuracy. Properly disentangling possible regularization384 benefits from optimization speed-ups is crucial if we are to understand neural network training,385 especially at larger batch sizes where we lose some of the regularization effect of gradient noise.386 Hypothetically, if the primary benefit of a training procedure is regularization, then it would be better387 to compare the method with other regularization baselines than other optimizers.388 Ultimately, we only care about batch size to the extent that higher degrees of data parallelism lead389 to faster training. Training with a larger batch size is a means, not the end goal. New optimizers—390 whether designed for generic batch sizes or larger batch sizes—have the potential to dramatically391 improve algorithmic efficiency across multiple workloads, but our results show that standard opti-392 mizers can match the performance of newer alternatives on the workloads we considered. Indeed,393 despite the legion of new update rule variants being proposed in the literature, standard Adam and394 Momentum remain the workhorses of practitioners and researchers alike, while independent empirical395 comparisons consistently find no clear winner when optimizers are compared across a variety of396 workloads [Schmidt et al., 2020]. Meanwhile, as Choi et al. [2019] and our results underscore,397 comparisons between optimizers crucially depend on the effort spent tuning hyperparameters for each398 optimizer. 
Given these facts, we should regard with extreme caution studies claiming to show the superiority of one particular optimizer over others. Part of the issue stems from current incentives in the research community; we overvalue the novelty of new methods and undervalue establishing strong baselines to measure progress against. This is particularly problematic in the study of optimizers, where the learning rate schedule is arguably more important than the choice of the optimizer update rule itself! As our results show, the best learning rate schedule is tightly coupled with the optimizer, meaning that tuning the learning rate schedule for a new optimizer will generally favor the new optimizer over a baseline unless the schedule of the baseline is afforded the same tuning effort.

6 Conclusion

In this work, we demonstrated that standard optimizers, without any layer-wise normalization techniques, can match or exceed the large batch size results used to justify LARS and LAMB. Future work attempting to argue that a new algorithm is useful by comparing to baseline methods or results, including those established in this paper, faces a key challenge in showing that the gains are due to the new method and not merely due to better tuning or changes to the training pipeline (e.g. regularization tricks). Although gains from tuning will eventually saturate, we can, in principle, always invest more effort in tuning and potentially get better results for any optimizer. However, our goal should be developing optimizers that work better across many different workloads when taking into account the amount of additional tuning they require.

Moving forward, if we are to reliably make progress we need to rethink how we compare and evaluate new optimizers for neural network training. Given how sensitive optimizer performance is to the hyperparameter tuning protocol and how difficult it is to quantify hyperparameter tuning effort, we can't expect experiments with self-reported baselines to always lead to fair comparisons. Ideally, new training methods would be evaluated in a standardized competitive benchmark, where submitters of new optimizers do not have full knowledge of the evaluation workloads. Some efforts in this direction have started, for instance the MLCommons Algorithmic Efficiency Working Group,[17] but more work needs to be done to produce incentives for the community to publish well-tuned baselines and to reward researchers that conduct the most rigorous empirical comparisons.

[16] Deep learning folk wisdom is that "any method to make training less effective can serve as a regularizer," whether it is a bug in gradients or a clever algorithm.
[17] https://mlcommons.org/en/groups/research-algorithms/

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] See Sections 2, 3, 4
(b) Did you describe the limitations of your work? [Yes] We had a lengthy discussion of the limitations and scope of the work in Section 5
(c) Did you discuss any potential negative societal impacts of your work? [No] We did not discuss this in the main text. Our primary contribution is to improve experimental protocols for other methodological work, which is so removed from specific machine learning applications that it is hard to determine the net impact.
That said, more effective experimental protocols should lead to more effective science, which in turn should lead to more effective machine learning applications. Whether this development is positive or negative for society will depend on who stands to gain from the use of machine learning in future applied contexts. Additionally, although our work should, in the long run, save computational resources for individual researchers, in net across the community this may or may not produce an aggregate savings because more efficient machine learning training, by making larger scale projects more accessible, can lead to an increased demand for compute resources [York, 2006], which can have varying degrees of negative environmental impacts [Patterson et al., 2021].
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]

2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] See Appendix A for a comprehensive description of the problem setting.
(b) Did you include complete proofs of all theoretical results? [Yes] See Appendix A.

3. If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] We will include a link to all code and all possible reproducibility instructions after the anonymized reviewing period is over.
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] We are extremely detailed about our tuning procedures and dataset details; see Appendices B, D.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] While we do not report error bars in the tables in the main text, Appendices B.2, C contain box plots showing the quartiles of the distribution over random seeds.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] In Appendix B we state that we run on Google TPUs; however, we do not tally up the total number of experiments run (although an interested reader could compute it from the information we provided in our detailed appendices, given that we list all intermediate experiments, no matter how silly in hindsight).

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] We reference the relevant citations for all models, datasets, and techniques.
(b) Did you mention the license of the assets? [No]
(c) Did you include any new assets either in the supplemental material or as a URL? [No]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
[N/A]482 (c) Did you include the estimated hourly wage paid to participants and the total amount483 spent on participant compensation? [N/A]484 References485 Martı́n Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.486 Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew487 Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath488 Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,489 Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent490 Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg,491 Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on492 heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from493 tensorflow.org.494 Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, and Cyril Zhang. Disentangling adaptive495 gradient methods from learning rates. arXiv preprint arXiv:2002.11803, 2020.496 Olivier Bousquet, Sylvain Gelly, Karol Kurach, Olivier Teytaud, and Damien Vincent. Critical hyper-497 parameters: No random, no cry. arXiv, 2017. URL https://arxiv.org/abs/1706.03200.498 James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal499 Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and500 Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL501 http://github.com/google/jax.502 Dami Choi, Christopher J Shallue, Zachary Nado, Jaehoon Lee, Chris J Maddison, and George E503 Dahl. On empirical comparisons of optimizers for deep learning. arXiv preprint arXiv:1910.05446,504 2019.505 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep506 bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.507 Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the gen-508 eralization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741,509 2017.510 Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by511 reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.512 Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa,513 Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of514 a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer515 Architecture, pages 1–12, 2017.516 Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint517 arXiv:1412.6980, 2014.518 Sameer Kumar, Victor Bitorff, Dehao Chen, Chiachen Chou, Blake Hechtman, HyoukJoong Lee,519 Naveen Kumar, Peter Mattson, Shibo Wang, Tao Wang, et al. Scale mlperf-0.6 models on google520 tpu-v3 pods. 
arXiv preprint arXiv:1909.09756, 2019.521 Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson,522 Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojy-523 oti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Atsushi Ike, Bill Jia,524 Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Guokai Ma, Deepak Narayanan, Tayo525 Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St.526 John, Tsuguchika Tabaru, Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, Cliff Young, and527 Matei Zaharia. MLPerf training benchmark. arXiv preprint arXiv:1910.01500, 2019. URL528 https://arxiv.org/abs/1910.01500.529 Yurii E Nesterov. A method for solving the convex programming problem with convergence rate530 O(1/kˆ2). In Dokl. akad. nauk Sssr, volume 269, pages 543–547, 1983.531 David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild,532 David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. arXiv533 preprint arXiv:2104.10350, 2021.534 Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR535 Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.536 Robin M Schmidt, Frank Schneider, and Philipp Hennig. Descending through a crowded valley–537 benchmarking deep learning optimizers. arXiv preprint arXiv:2007.01547, 2020.538 Christopher J Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and539 George E Dahl. Measuring the effects of data parallelism on neural network training. Journal of540 Machine Learning Research, 20(112):1–49, 2019.541 Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization542 and momentum in deep learning. In ICML, 2013.543 Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking544 the inception architecture for computer vision. In Proceedings of the IEEE conference on computer545 vision and pattern recognition, pages 2818–2826, 2016.546 Yu Emma Wang, Gu-Yeon Wei, and David Brooks. Benchmarking tpu, gpu, and cpu platforms for547 deep learning. arXiv preprint arXiv:1907.10701, 2019.548 Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in549 stochastic meta-optimization. arXiv preprint arXiv:1803.02021, 2018.550 Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng. Image classification at551 supercomputer scale. arXiv preprint arXiv:1811.06992, 2018.552 Richard York. Ecological paradoxes: William stanley jevons and the paperless office. Human Ecology553 Review, pages 143–147, 2006.554 Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv555 preprint arXiv:1708.03888, 2017.556 Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan557 Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep558 learning: Training bert in 76 minutes. In International Conference on Learning Representations,559 2019.560 Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George Dahl, Chris561 Shallue, and Roger B Grosse. Which algorithmic choices matter at which batch sizes? insights562 from a noisy quadratic model. 
In Advances in Neural Information Processing Systems, pages563 8196–8207, 2019.564 Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and565 Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching566 movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer567 Vision (ICCV), ICCV ’15, page 19–27, USA, 2015. IEEE Computer Society. ISBN 9781467383912.568 doi: 10.1109/ICCV.2015.11. URL https://doi.org/10.1109/ICCV.2015.11.569
1. What is the main contribution of the paper regarding reproducing results from previous papers?
2. What are the strengths and weaknesses of the paper's approach to comparing optimizers?
3. How does the paper highlight the challenges of comparing optimizers, and what lessons does it provide for future researchers?
4. What are some recent works that have explored the same or similar topics as the paper?
5. How does the paper raise an important question about comparing optimizers, but not provide a general method to attack this problem?
Summary Of The Paper Review
Summary Of The Paper
The authors detail the significant effort required to reproduce results from the original LARS and LAMB papers. Having reproduced results for LARS (for ResNet-50 training on ImageNet) and LAMB (for BERT pretraining), the authors then show that standard/older optimizers (Nesterov momentum and Adam) can match or even exceed LARS and LAMB at large batch sizes. The authors provide in-depth details on both the difficulties they faced in reproducing the original results (i.e., fixing discrepancies between details in the publications and available source code from online implementations) and their hyperparameter optimization for Nesterov momentum and Adam on the datasets evaluated. Insights and lessons are also provided, both on the importance of hyperparameter tuning when comparing deep learning solvers and for practitioners attempting to demonstrate that performance gains of a new method are not merely due to better tuning or regularization tricks.

Review
The paper is well written and the authors have done an impressive job both recreating the results of the LARS and LAMB papers (including all necessary hyperparameters and implementation tricks) and showing that Adam and Nesterov momentum may be used in place of LARS and LAMB for the two tasks presented. Furthermore, the lessons detailed are important and the paper is a valuable addition for researchers putting together new baselines. I spent a long time combing through the paper. On the one hand, this is an unfiltered account of the struggles researchers often encounter when attempting to reproduce the state of the art (including the extreme detective work necessary to find the undocumented tricks needed to achieve some results). However, in the end, I did not provide a higher score due to the lack of novel contributions and the narrow scope of the paper. For the former, there are no new methods presented (or novel applications), and the lessons gleaned (e.g., optimizer performance depending on the learning rate schedule, and the effect of large batch sizes on DL training schedules) have been extensively explored in numerous previous works (see below for recent examples). For the latter, the paper narrowly focuses on the results in previously published papers, rather than, for instance, showing how to optimize existing solvers for general downstream tasks. Furthermore, there is no cohesive method presented that readers can take away to aid them, i.e., how can these lessons help researchers develop new methods to push the state of the art? Ultimately, the paper raises an important question: "comparing optimizers is difficult since hyperparameter tuning tends to dominate the comparisons." However, the paper does not provide an answer or a general method to attack this difficult problem.

Other comments:

"In particular, we show that the optimal shape of the learning rate schedule is optimizer-dependent (in addition to the scale), and that differences in the schedule can dominate optimizer comparisons at smaller step budgets and become less important at larger step budgets." <- This has been observed in several previous studies. E.g., see:
- Lee, Namhoon, et al. "Understanding the effects of data parallelism and sparsity on neural network training." arXiv preprint arXiv:2003.11316 (2020).
- Schmidt, Robin M., Frank Schneider, and Philipp Hennig. "Descending through a crowded valley - benchmarking deep learning optimizers." International Conference on Machine Learning. PMLR, 2021.

"they are loose upper bounds on the worst case behavior of the algorithms" <- You have not shown the lack of tightness (i.e., achievability); how have you characterized these bounds as loose?

"Although there are heuristics for adjusting the learning rate as the batch size changes, these heuristics inevitably break down sufficiently far from the initial batch size and it is also not clear how to apply them to other training hyperparameters (e.g. momentum)." <- The former claim is too broad; there are importance-sampling-based approaches which allow automatic learning rate adjustments, e.g.: Johnson, Tyler B., and Carlos Guestrin. "Training deep models faster with robust, approximate importance sampling." Advances in Neural Information Processing Systems 31 (2018): 7265-7275.

- Please include more discussion (in the main paper) of the Appendix results displaying LARS performance with and without the "optimization tricks and non-default values of uncommonly-tuned hyperparameters."

"our BERT result with Adam also improves upon the wall-time record of LAMB reported in You et al. 2019" <- I agree that reporting training speed in steps is a fair comparison between different training schedules and solvers. However, due to compute environment issues, comparing wall-times reported in another paper is not a fair comparison (the LAMB experiment from You et al. 2019 would have to be repeated on the same compute environment used to run the presented Adam experiments).

"Moving forward, if we are to reliably make progress we need to rethink how we compare and evaluate new optimizers for neural network training." <- Once again, the paper focuses on asking this question rather than providing answers to it.

"However, achieving perfect scaling is not always straightforward. Changing the batch size changes the training dynamics, requiring the training hyperparameters (e.g. learning rate) to be carefully re-tuned in order to maintain the same level of validation performance." <- Citation required for this claim.

"In addition, smaller batch sizes provide implicit regularization from gradient noise that may need to be replaced by other forms of regularization when the batch size is increased." <- Citation required for this claim.
NIPS
Title
A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes

Abstract
Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally.

1 Introduction

In recent years, hardware systems employing GPUs and TPUs have enabled neural network training programs to process dramatically more data in parallel than ever before. The most popular way to exploit these systems is to increase the batch size in the optimization algorithm (i.e. the number of training examples processed per training step). On many workloads, modern systems can scale to larger batch sizes without significantly increasing the time per step [Jouppi et al., 2017, Wang et al., 2019], thus proportionally increasing the number of training examples processed per second. If researchers can use this increased throughput to reduce the time required to train each neural network, then they should achieve better results by training larger models, using larger datasets, and by exploring new ideas more rapidly.

As the capacity for data parallelism continues to increase, practitioners can take their existing, well-tuned training configurations and re-train with larger batch sizes, hoping to achieve the same performance in less training time [e.g. Ying et al., 2018]. On an idealized data-parallel system with negligible overhead from increasing the batch size, they might hope to achieve perfect scaling, a proportional reduction in training time as the batch size increases.

However, achieving perfect scaling is not always straightforward. Changing the batch size changes the training dynamics, requiring the training hyperparameters (e.g.
learning rate) to be carefully28 re-tuned in order to maintain the same level of validation performance.1 In addition, smaller batch29 sizes provide implicit regularization from gradient noise that may need to be replaced by other forms30 of regularization when the batch size is increased. Finally, even with perfect tuning, increasing31 1 Although there are heuristics for adjusting the learning rate as the batch size changes, these heuristics inevitably break down sufficiently far from the initial batch size and it is also not clear how to apply them to other training hyperparameters (e.g. momentum). Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. the batch size eventually produces diminishing returns. After a critical batch size, the number of32 training steps cannot be decreased in proportion to the batch size – the number of epochs must33 increase to match the validation performance of the smaller batch size. See Shallue et al. 2019 for a34 survey of the effects of data parallelism on neural network training. Once these effects are taken into35 account, there is no strong evidence that increasing the batch size degrades the maximum achievable36 performance on any workload. At the same time, the ever-increasing capacity for data parallelism37 presents opportunities for new regularization techniques that can replace the gradient noise of smaller38 batch sizes and new optimization algorithms that can extend perfect scaling to larger batch sizes by39 using more sophisticated gradient information [Zhang et al., 2019].40 You et al. [2017] proposed the LARS optimization algorithm in the hope of speeding up neural41 network training by exploiting larger batch sizes. LARS is a variant of stochastic gradient descent42 (SGD) with momentum [Polyak, 1964] that applies layer-wise normalization before applying each43 gradient update. Although it is difficult to draw strong conclusions from the results presented in the44 LARS paper, 2 the MLPerf3 Training benchmark4 adopted LARS as one of two allowed algorithms45 in the closed division for ResNet-50 on ImageNet and it became the de facto standard algorithm for46 that benchmark task. With MLPerf entrants competing to find the fastest-training hyperparameters47 for LARS, the first place submissions in the two most recent MLPerf Training competitions used48 LARS to achieve record training speeds with batch sizes of 32,678 and 65,536, respectively. No49 publications or competitive submissions to MLPerf have attempted to match these results with a50 standard optimizer (e.g. Momentum or Adam). However, MLPerf entrants do not have a strong51 incentive (nor are necessarily permitted by the rules) to explore other algorithms because MLPerf52 Training is a systems benchmark that requires algorithmic equivalence between submissions to make53 fair comparisons. Moreover, since the main justification for LARS is its excellent performance on54 ResNet-50 at large batch sizes, more work is needed to quantify any benefit of LARS over standard55 algorithms at any batch size.56 You et al. [2019] later proposed the LAMB optimizer to speed up pre-training for BERT [Devlin57 et al., 2018] using larger batch sizes after concluding that LARS was not effective across workloads.58 LAMB is a variant of Adam [Kingma and Ba, 2014] that adds a similar layer-wise normalization step59 to LARS. You et al. 
[2019] used LAMB for BERT pre-training with batch sizes up to 65,536 and60 claimed that Adam cannot match the performance of LAMB beyond batch size 16,384.61 In this paper, we demonstrate that standard optimizers, without any layer-wise normalization tech-62 niques, can match or improve upon the large batch size results used to justify LARS and LAMB. In63 Section 2, we show that Nesterov momentum [Nesterov, 1983] matches the performance of LARS on64 the ResNet-50 benchmark with batch size 32,768. We are the first to match this result with a standard65 optimizer. In Section 3, contradicting the claims in You et al. [2019], we show that Adam obtains66 better BERT pre-training results than LAMB at the largest batch sizes, resulting in better downstream67 performance metrics after fine-tuning.68 In addition, we establish a new state-of-the-art for BERT pretraining speed, reaching an F1 score of69 90.46 in 7,818 steps using Adam at batch size 65,536 (we report training speed in steps because our70 focus is algorithmic efficiency, but since we compare LARS and LAMB to simpler optimizers, fewer71 training steps corresponds to faster wall-time in an optimized implementation – our BERT result72 with Adam also improves upon the wall-time record of LAMB reported in You et al. 2019). Taken73 together, our results establish stronger training speed baselines for these tasks and batch sizes, which74 we hope will assist future work aiming to accelerate training using larger batch sizes.75 In addition to the contributions mentioned above, we demonstrate several key effects that are often76 overlooked by studies aiming to establish the superiority of new optimization algorithms. We show77 that future work must carefully disentangle regularization and optimization effects when comparing a78 new optimizer to baselines. We also report several under-documented details used to generate the79 best LARS and LAMB results, a reminder that future comparisons should document any novel tricks80 and include them in baselines. Finally, our results add to existing evidence in the literature on the81 difficulty of performing independently rigorous hyperparameter tuning for optimizers and baselines.82 2 The modified AlexNet on ImageNet benchmark did not have well-established accuracy targets from prior work and LARS used a more general learning rate schedule than the momentum baseline. For ResNet-50 on ImageNet, LARS achieved sub-par accuracy numbers and was not compared to any other optimizer at the same batch size, leaving open the possibility that a generic optimizer would scale just as well as LARS. 3 MLPerf is a trademark of MLCommons.org. 4 https://mlperf.org/training-overview In particular, we show that the optimal shape of the learning rate schedule is optimizer-dependent (in83 addition to the scale), and that differences in the schedule can dominate optimizer comparisons at84 smaller step budgets and become less important at larger step budgets.85 1.1 Related work86 Shallue et al. [2019] and Zhang et al. [2019] explored the effects of data parallelism on neural network87 training for different optimizers, finding no evidence that larger batch sizes degrade performance88 and demonstrating that different optimizers can achieve perfect scaling up to different critical batch89 sizes. You et al. [2017, 2019] developed the LARS and LAMB optimizers in the hope of speeding up90 training by achieving perfect scaling beyond standard optimizers. 
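For readers unfamiliar with the update rule itself, the layer-wise normalization at the core of LARS, as described in the introduction, can be sketched roughly as follows. This is a simplified, generic formulation (the trust coefficient, weight decay handling, and which parameters are excluded from normalization all vary between implementations), not the exact MLPerf pipeline.

```python
import numpy as np

def lars_layer_update(w, grad, velocity, lr, momentum=0.9,
                      weight_decay=1e-4, trust_coefficient=0.001):
    """One LARS-style step for a single layer's parameters (a rough sketch).

    The step is rescaled by a layer-wise "trust ratio" ||w|| / ||g||, so each
    layer moves in proportion to its own parameter norm regardless of the raw
    gradient scale.
    """
    g = grad + weight_decay * w
    w_norm, g_norm = np.linalg.norm(w), np.linalg.norm(g)
    local_lr = trust_coefficient * w_norm / g_norm if w_norm > 0 and g_norm > 0 else 1.0
    velocity = momentum * velocity + local_lr * lr * g
    return w - velocity, velocity
```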
Many other recent papers have91 proposed new optimization algorithms for generic batch sizes or larger batch sizes [see Schmidt92 et al., 2020]. Choi et al. [2019] and Schmidt et al. [2020] demonstrated the difficulties with fairly93 comparing optimizers, showing that the hyperparameter tuning protocol is a key determinant of94 optimizer rankings. The MLPerf Training benchmark [Mattson et al., 2019] provides a competitive95 ranking of neural network training systems, but does not shed much light on the relative performance96 of optimizers because entrants are limited in the algorithms they can use and the hyperparameters97 they can tune.98 2 Matching LARS on ImageNet99 The MLPerf training benchmark for ResNet-50 v1.5 on ImageNet [Mattson et al., 2019] aims to100 reach 75.9% validation accuracy in the shortest possible wall-clock time. In the closed division of101 the competition, entrants must choose between two optimizers, SGD with momentum or LARS, and102 are only allowed to tune a specified subset of the optimization hyperparameters, with the remaining103 hyperparameter values set by the competition rules.5 The winning entries in the two most recent104 competitions used LARS with batch size 32,768 for 72 training epochs6 and LARS with batch size105 65,536 for 88 training epochs,7 respectively. Kumar et al. [2019] later improved the training time106 for batch size 32,768 by reaching the target accuracy in 64 epochs. These are currently the fastest107 published results on the ResNet-50 benchmark. However, it has been unclear whether LARS was108 necessary to achieve these training speeds since no recent published results or competitive MLPerf109 submissions have used another optimizer. In this section, we describe how we matched the 64 epoch,110 32,768 batch size result of LARS using standard Nesterov momentum.8111 A fair benchmark of training algorithms or hardware systems must account for stochasticity in112 individual training runs. In the MLPerf competition, the benchmark metric is the mean wall-clock113 time of 5 trials after the fastest and slowest trials are excluded. Only 4 out of the 5 trials need to reach114 the target accuracy and there is no explicit limit on the number of times an entrant can try a different115 set of 5 trials. Since our goal is to compare algorithms, rather than systems, we aim to match the116 LARS result in terms of training steps instead (but since Nesterov momentum is computationally117 simpler than LARS, this would also correspond to faster wall-clock time on an optimized system).118 Specifically, we measure the median validation accuracy over 50 training runs with a fixed budget of119 2,512 training steps9 at a batch size of 32,768. When we ran the published LARS training pipeline,10120 LARS achieved a median accuracy of 75.97% and reached the target in 35 out of 50 trials. We121 consider the LARS result to be matched by another optimizer if the median over 50 trials exceeds the122 target of 75.9%.123 2.1 Nesterov momentum at batch size 32k124 This section describes how we used the standard Nesterov momentum optimizer to train the ResNet-125 50 v1.5 on ImageNet to 75.9% validation accuracy in 2,512 update steps at a batch size of 32,768,126 matching the best published LARS result at this batch size. Although we implemented our own127 training program, the only logical changes we made to the published LARS pipeline were to the128 optimizer and the optimization hyperparameters. 
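As a quick back-of-the-envelope check on the step budget quoted above (assuming the usual ImageNet training-set size of roughly 1.28 million images):

```python
steps, batch_size, train_set_size = 2_512, 32_768, 1_281_167
epochs = steps * batch_size / train_set_size
print(f"{epochs:.1f} epochs")  # about 64 epochs, matching the 64-epoch budget from Kumar et al. [2019]
```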
Our model implementation and data pre-processing pipeline were identical to those required under the MLPerf closed division rules (see Appendix B).
[5] https://git.io/JtknD
[6] https://mlperf.org/training-results-0-6
[7] https://mlperf.org/training-results-0-7
[8] The 88 epoch, 65,536 batch size result is faster in terms of wall-clock time but requires more training epochs, indicating that it is beyond LARS's perfect scaling regime. Although LARS obtains diminishing returns when increasing the batch size from 32,768 to 65,536, future work could investigate whether Nesterov momentum drops off more or less rapidly than LARS.
[9] Corresponding to 64 training epochs in Kumar et al. [2019].
[10] https://git.io/JtsLQ

We present two Nesterov momentum hyperparameter configurations that achieve comparable performance to LARS. Configuration A achieved a median accuracy of 75.97% (the same as LARS) and reached the target accuracy in 34 out of 50 trials. Configuration B is a modified version of Configuration A designed to make as few changes as possible to the LARS hyperparameters; it achieved a median accuracy of 75.92% and reached the target in 29 out of 50 trials. See Appendix D.1 for the complete hyperparameter configurations.

To achieve these results, we tuned the hyperparameters of the training pipeline from scratch using Nesterov momentum. We ran a series of experiments, each of which searched over a hand-designed hyperparameter search space using quasi-random search [Bousquet et al., 2017]. Between each experiment, we modified the previous search space and/or tweaked the training program to include optimization tricks and non-default hyperparameter values we discovered in the state-of-the-art LARS pipeline. The full sequence of experiments we ran, including the number of trials, hyperparameters tuned, and search space ranges, is provided in Appendix D.4. Once we had matched the LARS result with Configuration A, we tried setting each hyperparameter to its value in the LARS pipeline in order to find the minimal set of changes that still achieved the target result, producing Configuration B. The remainder of this section describes the hyperparameters we tuned and the techniques we applied on the journey to these results.

2.1.1 Nesterov Momentum Optimizer

Nesterov momentum is a variant of classical or "heavy-ball" momentum defined by the update rule

v_{t+1} = μ v_t + ∇ℓ(θ_t),    θ_{t+1} = θ_t − η_t (μ v_{t+1} + ∇ℓ(θ_t)),

where v_0 = 0, θ_t is the vector of model parameters after t steps, ∇ℓ(θ_t) is the gradient of the loss function ℓ(θ) averaged over a batch of training examples, μ is the momentum, and η_t is the learning rate for step t. We prefer Nesterov momentum over classical momentum because it tolerates larger values of its momentum parameter [Sutskever et al., 2013] and sometimes outperforms classical momentum, although the two algorithms perform similarly on many tasks [Shallue et al., 2019, Choi et al., 2019]. We tuned the Nesterov momentum μ in Configurations A and B.
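Written out as code, the update rule above is only a few lines. The sketch below covers the optimizer step alone (the full pipeline also applies the learning rate schedule of Section 2.1.4, L2 regularization, and label smoothing); the momentum value is whatever was tuned, e.g. the 0.98 quoted later in Section 4.

```python
import numpy as np

def nesterov_momentum_step(theta, velocity, grad, lr, mu=0.98):
    """One step of the Nesterov momentum update defined above:
    v_{t+1} = mu * v_t + g_t,  theta_{t+1} = theta_t - lr_t * (mu * v_{t+1} + g_t)."""
    velocity = mu * velocity + grad
    theta = theta - lr * (mu * velocity + grad)
    return theta, velocity
```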
We discuss the learning rate schedule {η_t} separately in Section 2.1.4.

2.1.2 Batch normalization

The ResNet-50 v1.5 model uses batch normalization [Ioffe and Szegedy, 2015], defined as

BN(x^(l)) = ((x^(l) − mean(x^(l))) / sqrt(var(x^(l)) + ε)) × γ^(l) + β^(l),

where x^(l) is a vector of pre-normalization outputs from layer l, mean(·) and var(·) denote the element-wise sample mean and variance across the batch of training examples,[11] and γ^(l) and β^(l) are trainable model parameters.

Batch normalization introduces the following tuneable hyperparameters: ε, the small constant added to the sample variance; the initial values of γ^(l) and β^(l); and ρ, which governs the exponential moving averages of the scaling factors used in evaluation. The LARS pipeline uses ε = 10^-5 and ρ = 0.9. It sets the initial value of β^(l) to 0.0 everywhere, but the initial value of γ^(l) depends on the layer: it sets γ^(l) to 0.0 in the final batch normalization layer of each residual block, and to 1.0 everywhere else. In Configuration A, we tuned ε, ρ, and γ0, the initial value of γ^(l) in the final batch normalization layer of each residual block. In Configuration B, we used the same values as LARS for ε and ρ, but we found that choosing γ0 between 0.0 and 1.0 was important for matching the LARS result with Nesterov momentum.

[11] In a distributed training environment the mean and variance are commonly computed over a subset of the full batch. The LARS pipeline uses a "virtual batch size" of 64, which we also use to avoid changing the training objective [Hoffer et al., 2017].

2.1.3 Regularization

In Configuration A, we tuned both the L2 regularization coefficient λ and label smoothing coefficient τ [Szegedy et al., 2016]. The LARS pipeline uses λ = 10^-4 and τ = 0.1. Crucially, the LARS pipeline does not apply L2 regularization to the bias variables of the ResNet model nor the batch normalization parameters γ^(l) and β^(l) (indeed, the published LARS pipeline does not even apply LARS to these parameters; it uses Heavy-ball momentum). This detail is extremely important for both LARS and Nesterov momentum to achieve the fastest training speed. Configuration B used the same λ and τ as Configuration A.

2.1.4 Learning rate schedule

The LARS pipeline uses a piecewise polynomial schedule

η_t = η_init + (η_peak − η_init) · (t / t_warmup)^{p_warmup},               for t ≤ t_warmup,
η_t = η_final + (η_peak − η_final) · ((T − t) / (T − t_warmup))^{p_decay},  for t > t_warmup,

with η_init = 0.0, η_peak = 29.0, η_final = 10^-4, p_warmup = 1, p_decay = 2, and t_warmup = 706 steps. In Configuration A, we re-tuned all of these hyperparameters with Nesterov momentum. In Configuration B, we set η_init, p_decay, and t_warmup to the same values as LARS, changing only p_warmup from 1 to 2 and re-scaling η_peak and η_final.

2.1.5 Comparing Nesterov momentum and LARS

Table 1 shows the hyperparameter values for Configuration B that differ from the state-of-the-art LARS pipeline. Aside from re-tuning the momentum, learning rate scale, and regularization hyperparameters (whose optimal values are all expected to change with the optimizer), the only changes are setting p_warmup to 2 instead of 1 and re-tuning γ0.

Figure 1 shows the LARS learning rate schedule compared to the Nesterov momentum schedule. Even though these schedules are similar, we found that each optimizer had a different optimal value of the warmup polynomial power.
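A direct transcription of this schedule is sketched below, with the LARS pipeline's values as defaults and assuming T is the total step budget (2,512 here); Configuration B changes p_warmup to 2 and rescales the peak and final rates.

```python
def warmup_polynomial_lr(t, total_steps=2512, warmup_steps=706,
                         init_lr=0.0, peak_lr=29.0, final_lr=1e-4,
                         p_warmup=1, p_decay=2):
    """Piecewise polynomial warmup/decay schedule from Section 2.1.4."""
    if t <= warmup_steps:
        return init_lr + (peak_lr - init_lr) * (t / warmup_steps) ** p_warmup
    frac = (total_steps - t) / (total_steps - warmup_steps)
    return final_lr + (peak_lr - final_lr) * frac ** p_decay
```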
As Table 2 shows, Nesterov momentum performs198 better with pwarmup = 2 instead of 1, while the opposite199 is true with LARS. As discussed in Agarwal et al. [2020],200 optimizers can induce implicit step size schedules that201 strongly influence their training dynamics and solution202 quality, and it appears from Table 2 that the implicit step203 sizes of Nesterov momentum and LARS may evolve dif-204 ferently, causing the shapes of their optimal learning rate205 schedules to differ.206 Although the main concern of a practitioner is validation performance, the primary task of an207 optimization algorithm is to minimize training loss. Table 2 shows that Nesterov momentum achieves208 higher training accuracy than LARS, despite similar validation performance. Thus, it may be more209 appropriate to consider the layerwise normalization of LARS to be a regularization technique, rather210 than an optimization technique.211 Spending even more effort tuning LARS or Nesterov momentum would likely further improve the212 current state-of-the-art for that optimizer. Meaningful optimizer comparisons are only possible213 with independent and equally intensive tuning efforts, and we do not claim that either optimizer214 outperforms the other on this benchmark. That said, if the main evidence for LARS’s utility as a215 “large-batch optimizer” is its performance on this particular benchmark, then more evidence is needed216 to quantify any benefit it has over traditional, generic optimizers like Nesterov momentum.217 2.2 Lessons learned218 In hindsight, it was only necessary to make a few changes to the LARS pipeline to match its219 performance at batch size 32,768 with Nesterov momentum. However, Table 1 does not accurately220 represent the effort required when attempting to match a highly tuned training-speed benchmark.221 Firstly, as described in Sections 2.1.2 and 2.1.3, the strong results of LARS depend partly on a few222 subtle optimization tricks and non-default values of uncommonly-tuned hyperparameters. Fortunately,223 in this case we could discover these tricks by examining the open-source code required for MLPerf224 submissions, but machine learning research papers do not always report these important details.225 Researchers can easily waste a lot of experiments and produce misleading results before getting all of226 these details right. We demonstrate the importance of adding these tricks to our Nesterov momentum227 pipeline in Appendix C; without these tricks (or some new tricks), we likely would not have been228 able to match the LARS performance.229 Secondly, the learning rate schedule really matters when trying to maximize performance with a230 relatively small step budget. Both LARS and Nesterov momentum are sensitive to small deviations231 from the optimized learning rate schedules in Figure 1, and neither schedule works as well for the232 other optimizer. Although relatively minor changes were sufficient to match LARS with Nesterov233 momentum, there is no way to know a priori how the optimal schedule will look for a new optimizer234 Wu et al. [2018]. Even in toy settings where the optimal learning rate schedule can be derived, it235 does not fit into commonly used schedule families and depends strongly on the optimizer Zhang236 et al. [2019]. Indeed, this problem applies to the other optimization hyperparameters as well: it237 is extremely difficult to know which are worth considering ahead of time. 
Finally, even when we238 narrowed down our hyperparemeter search spaces around the optimal point, the volume of our search239 spaces corresponding to near-peak performance was small, likely due to the small step budget [Shallue240 et al., 2019]. We investigate how these effects change with a less stringent step budget in Section 4.241 3 Stronger BERT pretraining speed baselines242 You et al. [2019] developed the LAMB optimizer in the hope of speeding up training for BERT-Large243 [Bidirectional Encoder Representations from Transformers, Devlin et al., 2018]. BERT training244 consists of two phases. The “pretraining” phase has two objectives: (1) predicting masked tokens245 based on the rest of the sequence (a masked language model), and (2) predicting whether two246 given sentences follow one from another. Finally, the “fine-tuning” phase refines the model for a247 downstream task of interest. BERT pretraining takes a considerable amount of time (up to 3 days on248 16 Cloud TPU-v3 chips Jouppi et al. [2017]), whereas the fine-tuning phase is typically much faster.249 Model quality is typically assessed on the downstream metrics, not on pretraining loss, making BERT250 training a somewhat awkward benchmark for optimization research.251 You et al. [2019] used LAMB for BERT pretraining with batch sizes up to 65,536 and claimed that252 LAMB outperforms Adam batch size 16,384 and beyond. The LAMB optimizer has since appeared253 in several NLP toolkits, including as Microsoft DeepSpeed and NVIDIA Multi-node BERT training,254 and as a benchmark task in MLPerf v0.7.12255 As shown in Table 3, we trained Adam (with decoupled weight decay) baselines that achieve better256 results than both the LAMB and Adam results reported in You et al. [2019]. Our new Adam257 baselines obtain better F1 scores on the development set of the SQuaD v1.1 task in the same number258 of training steps as LAMB for both batch size 32,768 and the hybrid 65,536-then-32,768 batch259 size training regime in You et al. [2019]. We also ran Adam at batch size 65,536 to reach nearly260 the same F1 score as the hybrid batch size LAMB result, but in much fewer training steps. We261 believe 7,818 steps is a new state-of-the-art for BERT pretraining speed [in our experiments, it262 also improves upon the 76-minute record claimed in You et al., 2019]. Additionally, at batch263 size 32,768 our Adam baseline got a better pretraining loss of 1.277 compared to LAMB’s 1.342.264 12 We do not consider the MLPerf task in this paper since it is a warm-start, partial training task. 265 We used the same experimental setup as You266 et al. [2019], including two pretraining phases267 with max sequence lengths of 128 and then 512.268 In order to match You et al. [2019], we reported269 the F1 score on the downstream SQuaD v1.1270 task as the target metric, although this metric271 introduces potential confounds: optimization272 efficiency should be measured on the training273 task using training and held-out data sets. Fortunately, in this case better pretraining performance274 correlated a with higher F1 score after fine-tuning. See Appendix B.2 for additional experiment275 details. We tuned Adam hyperparameters independently for each pretraining phase, specifically276 learning rate η, β1, β2, the polynomial power for the learning rate warmup pwarmup, and weight277 decay λ, using quasi-random search [Bousquet et al., 2017]. 
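For reference, one standard formulation of Adam with decoupled weight decay and the usual bias correction is sketched below; the β1, β2, ε, and weight decay shown are generic defaults, not the tuned values from Appendix D.2. The bias-correction terms are the ones noted in the next paragraph as missing from the BERT code base's Adam.

```python
import numpy as np

def adamw_step(theta, m, v, grad, t, lr, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One Adam step with decoupled weight decay (t is the 1-indexed step count)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # standard bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * theta)
    return theta, m, v
```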
See Appendix D.2 for the search spaces.278 In addition to hyperparmeter tuning, our improved Adam results at these batch sizes are also likely279 due to two implementation differences. First, the Adam implementation in You et al. [2019] comes280 from the BERT open source code base, in which Adam is missing the standard bias correction.13281 The Adam bias correction acts as an additional step size warm-up, thereby potentially improving the282 stability in the initial steps of training. Second, the BERT learning rate schedule had a discontinuity283 at the start of the decay phase due to the learning rate decay being incorrectly applied during warm-up284 14 (see Figure 2 in Appendix B). This peculiarity is part of the official BERT release and is present in285 3000+ copies of the BERT Training code on GitHub.286 4 Investigating a less stringent step budget287 Part of what makes comparing optimizers so difficult is that the hyperparameter tuning tends to288 dominate the comparisons [Choi et al., 2019]. Moreover, tuning becomes especially difficult when289 we demand a fixed epoch budget even when dramatically increasing the batch size [Shallue et al.,290 2019]. Fixing the epoch budget as the batch size increases is equivalent to demanding perfect scaling291 (i.e. that the number of training steps decreases by the same factor that the batch size is increased).292 We can view the role of hyperparameter tuning for large batch training as resisting the inevitable end293 of perfect scaling. For example, it might be possible to extend perfect scaling using delicately tuned294 learning rate schedules, but comparing optimizers under these conditions can make the learning rate295 schedule dominate the comparison by favoring some algorithms over others. Therefore, in order to296 better understand the behavior of LARS and LAMB compared to Nesterov Momentum and Adam, we297 ran additional ResNet-50 experiments with a more generous 6,000 step budget (vs 2,512 in Section 2)298 and a more simplistic cosine learning rate schedule. At batch size 32,768, this budget should let us299 reach better validation accuracy than the MLPerf target of 75.9%.300 Although not mentioned in You et al. [2017], the state-of-the-art MLPerf pipeline for “LARS” actually301 uses both LARS and Heavy-ball Momentum, with Momentum applied to the batch normalization and302 ResNet bias parameters and LARS applied to the other parameters. You et al. [2019] does not mention303 whether LAMB was only applied to some parameters and not others. If layerwise normalization can304 be harmful for some model parameters, this is critical information for practitioners using LARS or305 LAMB, since it might not be obvious which optimizer to apply to which parameters. To investigate306 this, we trained both pure LARS and LAMB configurations, as well as configurations that did not307 apply layerwise normalization to the batch normalization and ResNet bias parameters. Moreover,308 LAMB’s underlying Adam implementation defaults to = 10−6, rather than the typical 10−7 or309 10−8. In some cases, can be a critical hyperparameter for Adam [Choi et al., 2019], so we included310 Adam configurations with both = 10−6 and = 10−8.311 Table 4 shows the validation accuracy of these different configurations after training for 6,000312 steps with batch size 32,768. In every case, we used a simple cosine decay learning rate sched-313 ule and tuned the initial learning rate and weight decay using quasi-random search. 
We used314 momentum parameters of 0.98 for Nesterov momentum and 0.929 for LARS, respectively, based315 on the tuned values from Section 2. We used default hyperparameters for Adam and LAMB316 except where specified. We set all other hyperparameters to the same values as the state-of-the-317 art LARS pipeline, except we set γ0 = 1.0. See Appendix D.3 for more details. As expected,318 13 https://git.io/JtY8d 14 See https://git.io/JtnQW and https://git.io/JtnQ8. highly tuned learning rate schedules and optimizer hyperparameters are no longer necessary with319 a less stringent step budget. Multiple optimizer configurations in Table 4 exceed the MLPerf320 target accuracy of 75.9% at batch size 32,768 with minimal tuning. Training with larger batch321 sizes is not fundamentally unstable: stringent step budgets make hyperparameter tuning trickier.322 LAMB, are introduced alongside claims that337 the new optimizer does not require any—or at338 least minimal—tuning. Unfortunately, these339 claims require a lot of work to support, since340 they require trying the optimizer on new prob-341 lems without using those problems during the342 development of the algorithm. Although our ex-343 periments here are not sufficient to determine344 which optimizers are easiest to tune, experiments like these that operate outside the regime of highly345 tuned learning rate schedules can serve as a starting point. In this experiment, LARS and LAMB do346 not appear to have an advantage in how easy they are to tune even on a dataset and model that were347 used in the development of both of those algorithms. LAMB is a variant of Adam and performs about348 the same as Adam with the same value of ; LARS is more analogous to Momentum and indeed349 Nesterov momentum and LARS have similar performance.350 5 Discussion351 Our results show that standard, generic optimizers suffice for achieving strong results across batch352 sizes. Therefore, any research program to create new optimizers for training at larger batch sizes353 must start from the fact that Momentum, Adam, and likely other standard methods work fine at batch354 sizes as large as those considered in this paper. The LARS and LAMB update rules have no more355 to do with the batch size (or “large” batches) than the Momentum or Adam update rules. Although356 You et al. [2019] presented convergence rate bounds for LARS and LAMB to support their claims357 of superior performance, we show in Appendix A that Adam satisfies a similar bound to LAMB.358 These bounds all rely on very unrealistic assumptions.15 Most of all, they are loose upper bounds359 on the worst case behavior of the algorithms, not accurate reflections of optimizer performance in360 reality. Whether layer-wise normalization can be useful for optimization or regularization remains an361 open question. However, if LARS and LAMB have any advantage over standard techniques, it is not362 that they work dramatically better on the tasks and batch sizes in You et al. [2017, 2019]. This is363 not to suggest that there is nothing interesting about studying neural network optimization at larger364 batch sizes. For example, as gradient noise decreases, there may be opportunities to harness curvature365 information and extend the region of perfect scaling [Zhang et al., 2019]. 
However, there is currently366 no evidence that LARS and LAMB scale better than Momentum and Adam.367 Our primary concern in this paper has been matching the state of the art—and establishing new368 baselines—for training speed measurements of the sort used to justify new techniques and algorithms369 for training with larger batch sizes. In contrast, many practitioners are more concerned with obtaining370 the best possible validation error with a somewhat flexible training time budget. Part of the reason371 why matching LARS at batch size 32,768 was non-trivial is because getting state of the art training372 15 All convergence bounds assume no momentum is used, and the Lavg bound for LAMB also assumes β2 = 0, when it is typically 0.999. Additionally, Lavg could still be large if L∞ is large, but we leave an empirical analysis of this to future work. speed requires several tricks and implementation details that are not often discussed. It was not373 obvious to us a priori which ones would prove crucial. These details do not involve changes to the374 optimizer, but they interact with the optimizer in a regime where all hyperparameters need to be well375 tuned to stay competitive, making it necessary to re-tune everything for a new optimizer.376 In neural network optimization research, training loss is rarely discussed in detail and evaluation377 centers on validation/test performance since that is what practitioners care most about. However,378 although we shouldn’t only consider training loss, it is counter-intuitive and counter-productive to379 elide a careful investigation of the actual objective of the optimizer. If a new optimizer achieves better380 test performance, but shows no speedup on training loss, then perhaps it is not a better optimizer so381 much as an indirect regularizer. 16 Indeed, in our experiments we found that Nesterov momentum382 achieves noticeably better training accuracy on ResNet-50 than the LARS configuration we used,383 despite reaching roughly the same validation accuracy. Properly disentangling possible regularization384 benefits from optimization speed-ups is crucial if we are to understand neural network training,385 especially at larger batch sizes where we lose some of the regularization effect of gradient noise.386 Hypothetically, if the primary benefit of a training procedure is regularization, then it would be better387 to compare the method with other regularization baselines than other optimizers.388 Ultimately, we only care about batch size to the extent that higher degrees of data parallelism lead389 to faster training. Training with a larger batch size is a means, not the end goal. New optimizers—390 whether designed for generic batch sizes or larger batch sizes—have the potential to dramatically391 improve algorithmic efficiency across multiple workloads, but our results show that standard opti-392 mizers can match the performance of newer alternatives on the workloads we considered. Indeed,393 despite the legion of new update rule variants being proposed in the literature, standard Adam and394 Momentum remain the workhorses of practitioners and researchers alike, while independent empirical395 comparisons consistently find no clear winner when optimizers are compared across a variety of396 workloads [Schmidt et al., 2020]. Meanwhile, as Choi et al. [2019] and our results underscore,397 comparisons between optimizers crucially depend on the effort spent tuning hyperparameters for each398 optimizer. 
Given these facts, we should regard with extreme caution studies claiming to show the399 superiority of one particular optimizer over others. Part of the issue stems from current incentives in400 the research community; we overvalue the novelty of new methods and undervalue establishing strong401 baselines to measure progress against. This is particularly problematic in the study of optimizers,402 where the learning rate schedule is arguably more important than the choice of the optimizer update403 rule itself! As our results show, the best learning rate schedule is tightly coupled with the optimizer,404 meaning that tuning the learning rate schedule for a new optimizer will generally favor the new405 optimizer over a baseline unless the schedule of the baseline is afforded the same tuning effort.406 6 Conclusion407 In this work, we demonstrated that standard optimizers, without any layer-wise normalization408 techniques, can match or exceed the large batch size results used to justify LARS and LAMB. Future409 work attempting to argue that a new algorithm is useful by comparing to baseline methods or results,410 including those established in this paper, faces a key challenge in showing that the gains are due to the411 new method and not merely due to better tuning or changes to the training pipeline (e.g. regularization412 tricks). Although gains from tuning will eventually saturate, we can, in principle, always invest more413 effort in tuning and potentially get better results for any optimizer. However, our goal should be414 developing optimizers that work better across many different workloads when taking into account the415 amount of additional tuning they require.416 Moving forward, if we are to reliably make progress we need to rethink how we compare and evaluate417 new optimizers for neural network training. Given how sensitive optimizer performance is to the418 hyperparameter tuning protocol and how difficult it is to quantify hyperparameter tuning effort, we419 can’t expect experiments with self-reported baselines to always lead to fair comparisons. Ideally, new420 training methods would be evaluated in a standardized competitive benchmark, where submitters of421 new optimizers do not have full knowledge of the evaluation workloads. Some efforts in this direction422 have started, for instance the MLCommons Algorithmic Efficiency Working Group17, but more work423 needs to be done to produce incentives for the community to publish well-tuned baselines and to424 reward researchers that conduct the most rigorous empirical comparisons.425 16 Deep learning folk wisdom is that “any method to make training less effective can serve as a regularizer,” whether it is a bug in gradients or a clever algorithm. 17 https://mlcommons.org/en/groups/research-algorithms/ Checklist426 1. For all authors...427 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s428 contributions and scope? [Yes] See Sections 2, 3, 4429 (b) Did you describe the limitations of your work? [Yes] We had a lengthy discussion of430 the limitations and scope of the work in Section 5431 (c) Did you discuss any potential negative societal impacts of your work? [No] We did432 not discuss this in the main text. Our primary contribution is to improve experimental433 protocols for other methodological work, which is so removed from specific machine434 learning applications that it is hard to determine the net impact. 
That said, more435 effective experimental protocols should lead to more effective science which in turn436 should lead to more effective machine learning applications. Whether this development437 is positive or negative for society will depend on who stands to gain from the use of438 machine learning in future applied contexts. Additionally, although our work should, in439 the long run, save computational resources for individual researchers, in net across the440 community this may or may not produce an aggregate savings because more efficient441 machine learning training, by making larger scale projects more accessible, can lead442 to an increased demand for compute resources [York, 2006], which can have varying443 degrees of negative environmental impacts [Patterson et al., 2021].444 (d) Have you read the ethics review guidelines and ensured that your paper conforms to445 them? [Yes]446 2. If you are including theoretical results...447 (a) Did you state the full set of assumptions of all theoretical results? [Yes] See Appendix A448 for a comprehensive description of the problem setting.449 (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix A.450 3. If you ran experiments...451 (a) Did you include the code, data, and instructions needed to reproduce the main experi-452 mental results (either in the supplemental material or as a URL)? [No] We will include453 a link to all code and all possible reproducibility instructions after the anonymized454 reviewing period is over.455 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they456 were chosen)? [Yes] We are extremely detailed about our tuning procedures and dataset457 details, see Appendices B, D.458 (c) Did you report error bars (e.g., with respect to the random seed after running experi-459 ments multiple times)? [Yes] While we do not report error bars in the tables in the main460 text, Appendices B.2, C contains box plots showing the quartiles of the distribution461 over random seeds.462 (d) Did you include the total amount of compute and the type of resources used (e.g., type463 of GPUs, internal cluster, or cloud provider)? [No] In Appendix B we state that we464 run on Google TPUs, however we do not tally up the total number of experiments run465 (although an interested reader could compute it from the information we provided in466 our detailed appendices given that we list all intermediate experiments, no matter how467 silly in hindsight).468 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...469 (a) If your work uses existing assets, did you cite the creators? [Yes] We reference the470 relevant citations for all models, datasets, and techniques.471 (b) Did you mention the license of the assets? [No]472 (c) Did you include any new assets either in the supplemental material or as a URL? [No]473 (d) Did you discuss whether and how consent was obtained from people whose data you’re474 using/curating? [N/A]475 (e) Did you discuss whether the data you are using/curating contains personally identifiable476 information or offensive content? [N/A]477 5. If you used crowdsourcing or conducted research with human subjects...478 (a) Did you include the full text of instructions given to participants and screenshots, if479 applicable? [N/A]480 (b) Did you describe any potential participant risks, with links to Institutional Review481 Board (IRB) approvals, if applicable? 
[N/A]482 (c) Did you include the estimated hourly wage paid to participants and the total amount483 spent on participant compensation? [N/A]484 References485 Martı́n Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.486 Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew487 Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath488 Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,489 Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent490 Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg,491 Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on492 heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from493 tensorflow.org.494 Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, and Cyril Zhang. Disentangling adaptive495 gradient methods from learning rates. arXiv preprint arXiv:2002.11803, 2020.496 Olivier Bousquet, Sylvain Gelly, Karol Kurach, Olivier Teytaud, and Damien Vincent. Critical hyper-497 parameters: No random, no cry. arXiv, 2017. URL https://arxiv.org/abs/1706.03200.498 James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal499 Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and500 Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL501 http://github.com/google/jax.502 Dami Choi, Christopher J Shallue, Zachary Nado, Jaehoon Lee, Chris J Maddison, and George E503 Dahl. On empirical comparisons of optimizers for deep learning. arXiv preprint arXiv:1910.05446,504 2019.505 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep506 bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.507 Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the gen-508 eralization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741,509 2017.510 Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by511 reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.512 Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa,513 Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of514 a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer515 Architecture, pages 1–12, 2017.516 Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint517 arXiv:1412.6980, 2014.518 Sameer Kumar, Victor Bitorff, Dehao Chen, Chiachen Chou, Blake Hechtman, HyoukJoong Lee,519 Naveen Kumar, Peter Mattson, Shibo Wang, Tao Wang, et al. Scale mlperf-0.6 models on google520 tpu-v3 pods. 
arXiv preprint arXiv:1909.09756, 2019.521 Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson,522 Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojy-523 oti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Atsushi Ike, Bill Jia,524 Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Guokai Ma, Deepak Narayanan, Tayo525 Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St.526 John, Tsuguchika Tabaru, Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, Cliff Young, and527 Matei Zaharia. MLPerf training benchmark. arXiv preprint arXiv:1910.01500, 2019. URL528 https://arxiv.org/abs/1910.01500.529 Yurii E Nesterov. A method for solving the convex programming problem with convergence rate530 O(1/kˆ2). In Dokl. akad. nauk Sssr, volume 269, pages 543–547, 1983.531 David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild,532 David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. arXiv533 preprint arXiv:2104.10350, 2021.534 Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR535 Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.536 Robin M Schmidt, Frank Schneider, and Philipp Hennig. Descending through a crowded valley–537 benchmarking deep learning optimizers. arXiv preprint arXiv:2007.01547, 2020.538 Christopher J Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and539 George E Dahl. Measuring the effects of data parallelism on neural network training. Journal of540 Machine Learning Research, 20(112):1–49, 2019.541 Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization542 and momentum in deep learning. In ICML, 2013.543 Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking544 the inception architecture for computer vision. In Proceedings of the IEEE conference on computer545 vision and pattern recognition, pages 2818–2826, 2016.546 Yu Emma Wang, Gu-Yeon Wei, and David Brooks. Benchmarking tpu, gpu, and cpu platforms for547 deep learning. arXiv preprint arXiv:1907.10701, 2019.548 Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in549 stochastic meta-optimization. arXiv preprint arXiv:1803.02021, 2018.550 Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng. Image classification at551 supercomputer scale. arXiv preprint arXiv:1811.06992, 2018.552 Richard York. Ecological paradoxes: William stanley jevons and the paperless office. Human Ecology553 Review, pages 143–147, 2006.554 Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv555 preprint arXiv:1708.03888, 2017.556 Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan557 Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep558 learning: Training bert in 76 minutes. In International Conference on Learning Representations,559 2019.560 Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George Dahl, Chris561 Shallue, and Roger B Grosse. Which algorithmic choices matter at which batch sizes? insights562 from a noisy quadratic model. 
In Advances in Neural Information Processing Systems, pages563 8196–8207, 2019.564 Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and565 Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching566 movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer567 Vision (ICCV), ICCV ’15, page 19–27, USA, 2015. IEEE Computer Society. ISBN 9781467383912.568 doi: 10.1109/ICCV.2015.11. URL https://doi.org/10.1109/ICCV.2015.11.569
1. What are the strengths and weaknesses of the paper regarding its claims and contributions to the field of large-batch training of neural networks? 2. How does the reviewer assess the presentation and structure of the paper, and what are their suggestions for improvement? 3. What are the open questions and unresolved issues in the paper, particularly regarding the comparison between LARS/LAMB and traditional optimizers like Adam and SGD? 4. How does the reviewer evaluate the novelty and significance of the proposed approach, especially in light of prior work in the field? 5. What are some minor concerns and suggestions for further discussion or analysis that the reviewer has regarding the paper's content?
Summary Of The Paper Review
Summary Of The Paper Authors study the problem of large-batch training of neural networks and claim an important (negative) result: specialized large-batch algorithms are not necessarily better than traditional ones. Authors consider two standard optimizers (Adam and SGD w/ Nesterov momentum) and find hyperparameter configurations under which these standard optimizers match and sometimes outperform LAMB/LARS for large-batch training of ResNet50 and BERT respectively. Review Disclaimer: I am writing this (probably) additional review without reading prior discussion and other reviews. My review may or may not raise questions that were already addressed in prior discussion. This was a difficult paper to review. On one hand, it claims an important negative result that may affect an entire line of prior work. At the current state of the field, we badly need more papers like this to reduce the “research debt”, and hence the complexity of future research in the field. On the other hand, this paper is lacking in the way it supports its main claim, and overall the paper is not refined enough to recommend acceptance to NeurIPS. I would also argue that it does not stand up to the standards that authors themselves suggest in Section 6. For simplicity, I summarize strong and weak points of the paper in two separate sections. Reasons to accept Outstanding technical competence: authors work with highly tuned baselines with great attention to detail; L279-286 is pure brilliance Reporting the search space chronology (Appendix D.4) is a great practice that deserves wider adoption. Presentation-wise, the paper is clearly written and easy to read, even though it follows a somewhat unconventional structure. The only recommendation I can give is to avoid wrapping figures with low-margin text (e.g. Figure 4). That said, this is an optional request that did not affect my score. Reasons to reject My main issue with this paper is that it leaves an open question where it was clearly possible to give an answer with strong evidence. As authors rightfully state in L5-6, “it is an open queston” whether LARS/LAMB has any benefit for large batches. Unfortunately, providing specific configurations of hyperparameters where SGD/Adam outperforms LARS does not make us much closer to answering that question. There is a wide range of prior work [1,2,3] on properly benchmarking optimizers and their sensitivity to hyperparameters. For instance, [1] proposes a general approach that could have been used to test whether or not LAMB requires less hyperparameter tuning than Adam, as authors wonder in L336-340. In fact, it would not require much more compute than what was used for finding the optimal hyperparameters. Some prior work[2] already measured the tuning-aware efficiency of the same optimizers and demonstrates similar issues with LARS/LAMB, although not in a large-batch setting. It comes to the same conclusion that none of the specialized algorithms consistently outperform SGD/Adam. To summarize, this paper does indeed propose novel and potentially important “anecdotal” evidence, but I would expect a more rigorous analysis from a high profile venue such as NeurIPS. Aside from that, the discussion and conclusions made in the paper are insightful, but already well-known: the importance of hyperparameter configuration is better addressed in [1,2,3], the fact that an optimizer is also an indirect regularizer is also known [4]. 
Questions, minor concerns
MLPerf closed division: Authors refer to the closed division multiple times; however, there is a separate "open" division in MLPerf where participants can tune the optimizer hyperparameters. Currently, the submissions on that track use either a slightly modified LAMB or a completely different approach based on second-order optimization [5]. Is there an intuitive explanation for why these submissions still rely on LAMB?
Nesterov momentum: As far as I understood, you use Nesterov momentum for ResNet while baselines and other experiments use traditional momentum (or Adam's "momentum" for Adam/LAMB). Based on your reasoning in S2 this may be important. To better convey the importance of this change: Does LARS benefit from using Nesterov momentum instead of the classical one? (Or explain why it is impossible to modify LARS with Nesterov momentum.) [optional] Does Adam/LAMB benefit from switching to Nadam-like momentum?
Hyperparameter stability to batch size: Do configurations A/B maintain their advantage over LARS if we apply them to adjacent batch sizes without re-tuning? (e.g. 65,536 samples, 16,384 samples)
Page 1 footnote: "... these heuristics inevitably break down ..." [optional] This would benefit from a citation or further discussion.
Maybe typo: Tables 14, 15: t_{decay} = 2.815 is reported with a dot, as if it were approximately 3 steps. Did you mean 2,815?
[1] Optimizer Benchmarking Needs to Account for Hyperparameter Tuning. Prabhu Teja Sivaprasad, Florian Mai, Thijs Vogels, Martin Jaggi, François Fleuret. 2019
[2] How Much Progress Have We Made in Neural Network Training? A New Evaluation Protocol for Benchmarking Optimizers. Yuanhao Xiong, Xuanqing Liu, Li-Cheng Lan, Yang You, Si Si, Cho-Jui Hsieh. 2020
[3] Descending through a Crowded Valley: Benchmarking Deep Learning Optimizers. Robin M. Schmidt, Frank Schneider, and Philipp Hennig. 2020
[4] Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks. Yuanzhi Li, Colin Wei, Tengyu Ma. 2019
[5] Scalable Second Order Optimization for Deep Learning. Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Yoram Singer. 2020
P.S. Regardless of whether the paper is accepted, I must express my deepest respect for pinpointing the widespread implementation issues with Adam and learning rate decays in BERT.
NIPS
Title A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes Abstract Recently the LARS and LAMB optimizers have been proposed for training neural 1 networks faster using large batch sizes. LARS and LAMB add layer-wise normal2 ization to the update rules of Heavy-ball momentum and Adam, respectively, and 3 have become popular in prominent benchmarks and deep learning libraries. How4 ever, without fair comparisons to standard optimizers, it remains an open question 5 whether LARS and LAMB have any benefit over traditional, generic algorithms. In 6 this work we demonstrate that standard optimization algorithms such as Nesterov 7 momentum and Adam can match or exceed the results of LARS and LAMB at large 8 batch sizes. Our results establish new, stronger baselines for future comparisons 9 at these batch sizes and shed light on the difficulties of comparing optimizers for 10 neural network training more generally. 11 N/A Recently the LARS and LAMB optimizers have been proposed for training neural1 networks faster using large batch sizes. LARS and LAMB add layer-wise normal-2 ization to the update rules of Heavy-ball momentum and Adam, respectively, and3 have become popular in prominent benchmarks and deep learning libraries. How-4 ever, without fair comparisons to standard optimizers, it remains an open question5 whether LARS and LAMB have any benefit over traditional, generic algorithms. In6 this work we demonstrate that standard optimization algorithms such as Nesterov7 momentum and Adam can match or exceed the results of LARS and LAMB at large8 batch sizes. Our results establish new, stronger baselines for future comparisons9 at these batch sizes and shed light on the difficulties of comparing optimizers for10 neural network training more generally.11 1 Introduction12 In recent years, hardware systems employing GPUs and TPUs have enabled neural network training13 programs to process dramatically more data in parallel than ever before. The most popular way to14 exploit these systems is to increase the batch size in the optimization algorithm (i.e. the number15 of training examples processed per training step). On many workloads, modern systems can scale16 to larger batch sizes without significantly increasing the time per step [Jouppi et al., 2017, Wang17 et al., 2019], thus proportionally increasing the number of training examples processed per second.18 If researchers can use this increased throughput to reduce the time required to train each neural19 network, then they should achieve better results by training larger models, using larger datasets, and20 by exploring new ideas more rapidly.21 As the capacity for data parallelism continues to increase, practitioners can take their existing,22 well-tuned training configurations and re-train with larger batch sizes, hoping to achieve the same23 performance in less training time [e.g. Ying et al., 2018]. On an idealized data-parallel system with24 negligible overhead from increasing the batch size, they might hope to achieve perfect scaling, a25 proportional reduction in training time as the batch size increases.26 However, achieving perfect scaling is not always straightforward. Changing the batch size changes27 the training dynamics, requiring the training hyperparameters (e.g. 
learning rate) to be carefully28 re-tuned in order to maintain the same level of validation performance.1 In addition, smaller batch29 sizes provide implicit regularization from gradient noise that may need to be replaced by other forms30 of regularization when the batch size is increased. Finally, even with perfect tuning, increasing31 1 Although there are heuristics for adjusting the learning rate as the batch size changes, these heuristics inevitably break down sufficiently far from the initial batch size and it is also not clear how to apply them to other training hyperparameters (e.g. momentum). Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. the batch size eventually produces diminishing returns. After a critical batch size, the number of32 training steps cannot be decreased in proportion to the batch size – the number of epochs must33 increase to match the validation performance of the smaller batch size. See Shallue et al. 2019 for a34 survey of the effects of data parallelism on neural network training. Once these effects are taken into35 account, there is no strong evidence that increasing the batch size degrades the maximum achievable36 performance on any workload. At the same time, the ever-increasing capacity for data parallelism37 presents opportunities for new regularization techniques that can replace the gradient noise of smaller38 batch sizes and new optimization algorithms that can extend perfect scaling to larger batch sizes by39 using more sophisticated gradient information [Zhang et al., 2019].40 You et al. [2017] proposed the LARS optimization algorithm in the hope of speeding up neural41 network training by exploiting larger batch sizes. LARS is a variant of stochastic gradient descent42 (SGD) with momentum [Polyak, 1964] that applies layer-wise normalization before applying each43 gradient update. Although it is difficult to draw strong conclusions from the results presented in the44 LARS paper, 2 the MLPerf3 Training benchmark4 adopted LARS as one of two allowed algorithms45 in the closed division for ResNet-50 on ImageNet and it became the de facto standard algorithm for46 that benchmark task. With MLPerf entrants competing to find the fastest-training hyperparameters47 for LARS, the first place submissions in the two most recent MLPerf Training competitions used48 LARS to achieve record training speeds with batch sizes of 32,678 and 65,536, respectively. No49 publications or competitive submissions to MLPerf have attempted to match these results with a50 standard optimizer (e.g. Momentum or Adam). However, MLPerf entrants do not have a strong51 incentive (nor are necessarily permitted by the rules) to explore other algorithms because MLPerf52 Training is a systems benchmark that requires algorithmic equivalence between submissions to make53 fair comparisons. Moreover, since the main justification for LARS is its excellent performance on54 ResNet-50 at large batch sizes, more work is needed to quantify any benefit of LARS over standard55 algorithms at any batch size.56 You et al. [2019] later proposed the LAMB optimizer to speed up pre-training for BERT [Devlin57 et al., 2018] using larger batch sizes after concluding that LARS was not effective across workloads.58 LAMB is a variant of Adam [Kingma and Ba, 2014] that adds a similar layer-wise normalization step59 to LARS. You et al. 
[2019] used LAMB for BERT pre-training with batch sizes up to 65,536 and60 claimed that Adam cannot match the performance of LAMB beyond batch size 16,384.61 In this paper, we demonstrate that standard optimizers, without any layer-wise normalization tech-62 niques, can match or improve upon the large batch size results used to justify LARS and LAMB. In63 Section 2, we show that Nesterov momentum [Nesterov, 1983] matches the performance of LARS on64 the ResNet-50 benchmark with batch size 32,768. We are the first to match this result with a standard65 optimizer. In Section 3, contradicting the claims in You et al. [2019], we show that Adam obtains66 better BERT pre-training results than LAMB at the largest batch sizes, resulting in better downstream67 performance metrics after fine-tuning.68 In addition, we establish a new state-of-the-art for BERT pretraining speed, reaching an F1 score of69 90.46 in 7,818 steps using Adam at batch size 65,536 (we report training speed in steps because our70 focus is algorithmic efficiency, but since we compare LARS and LAMB to simpler optimizers, fewer71 training steps corresponds to faster wall-time in an optimized implementation – our BERT result72 with Adam also improves upon the wall-time record of LAMB reported in You et al. 2019). Taken73 together, our results establish stronger training speed baselines for these tasks and batch sizes, which74 we hope will assist future work aiming to accelerate training using larger batch sizes.75 In addition to the contributions mentioned above, we demonstrate several key effects that are often76 overlooked by studies aiming to establish the superiority of new optimization algorithms. We show77 that future work must carefully disentangle regularization and optimization effects when comparing a78 new optimizer to baselines. We also report several under-documented details used to generate the79 best LARS and LAMB results, a reminder that future comparisons should document any novel tricks80 and include them in baselines. Finally, our results add to existing evidence in the literature on the81 difficulty of performing independently rigorous hyperparameter tuning for optimizers and baselines.82 2 The modified AlexNet on ImageNet benchmark did not have well-established accuracy targets from prior work and LARS used a more general learning rate schedule than the momentum baseline. For ResNet-50 on ImageNet, LARS achieved sub-par accuracy numbers and was not compared to any other optimizer at the same batch size, leaving open the possibility that a generic optimizer would scale just as well as LARS. 3 MLPerf is a trademark of MLCommons.org. 4 https://mlperf.org/training-overview In particular, we show that the optimal shape of the learning rate schedule is optimizer-dependent (in83 addition to the scale), and that differences in the schedule can dominate optimizer comparisons at84 smaller step budgets and become less important at larger step budgets.85 1.1 Related work86 Shallue et al. [2019] and Zhang et al. [2019] explored the effects of data parallelism on neural network87 training for different optimizers, finding no evidence that larger batch sizes degrade performance88 and demonstrating that different optimizers can achieve perfect scaling up to different critical batch89 sizes. You et al. [2017, 2019] developed the LARS and LAMB optimizers in the hope of speeding up90 training by achieving perfect scaling beyond standard optimizers. 
Many other recent papers have91 proposed new optimization algorithms for generic batch sizes or larger batch sizes [see Schmidt92 et al., 2020]. Choi et al. [2019] and Schmidt et al. [2020] demonstrated the difficulties with fairly93 comparing optimizers, showing that the hyperparameter tuning protocol is a key determinant of94 optimizer rankings. The MLPerf Training benchmark [Mattson et al., 2019] provides a competitive95 ranking of neural network training systems, but does not shed much light on the relative performance96 of optimizers because entrants are limited in the algorithms they can use and the hyperparameters97 they can tune.98 2 Matching LARS on ImageNet99 The MLPerf training benchmark for ResNet-50 v1.5 on ImageNet [Mattson et al., 2019] aims to100 reach 75.9% validation accuracy in the shortest possible wall-clock time. In the closed division of101 the competition, entrants must choose between two optimizers, SGD with momentum or LARS, and102 are only allowed to tune a specified subset of the optimization hyperparameters, with the remaining103 hyperparameter values set by the competition rules.5 The winning entries in the two most recent104 competitions used LARS with batch size 32,768 for 72 training epochs6 and LARS with batch size105 65,536 for 88 training epochs,7 respectively. Kumar et al. [2019] later improved the training time106 for batch size 32,768 by reaching the target accuracy in 64 epochs. These are currently the fastest107 published results on the ResNet-50 benchmark. However, it has been unclear whether LARS was108 necessary to achieve these training speeds since no recent published results or competitive MLPerf109 submissions have used another optimizer. In this section, we describe how we matched the 64 epoch,110 32,768 batch size result of LARS using standard Nesterov momentum.8111 A fair benchmark of training algorithms or hardware systems must account for stochasticity in112 individual training runs. In the MLPerf competition, the benchmark metric is the mean wall-clock113 time of 5 trials after the fastest and slowest trials are excluded. Only 4 out of the 5 trials need to reach114 the target accuracy and there is no explicit limit on the number of times an entrant can try a different115 set of 5 trials. Since our goal is to compare algorithms, rather than systems, we aim to match the116 LARS result in terms of training steps instead (but since Nesterov momentum is computationally117 simpler than LARS, this would also correspond to faster wall-clock time on an optimized system).118 Specifically, we measure the median validation accuracy over 50 training runs with a fixed budget of119 2,512 training steps9 at a batch size of 32,768. When we ran the published LARS training pipeline,10120 LARS achieved a median accuracy of 75.97% and reached the target in 35 out of 50 trials. We121 consider the LARS result to be matched by another optimizer if the median over 50 trials exceeds the122 target of 75.9%.123 2.1 Nesterov momentum at batch size 32k124 This section describes how we used the standard Nesterov momentum optimizer to train the ResNet-125 50 v1.5 on ImageNet to 75.9% validation accuracy in 2,512 update steps at a batch size of 32,768,126 matching the best published LARS result at this batch size. Although we implemented our own127 training program, the only logical changes we made to the published LARS pipeline were to the128 optimizer and the optimization hyperparameters. 
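As a purely illustrative sketch (this is not part of the benchmark or of our experiment code; the function name and use of NumPy are our own choices for exposition), the matching criterion defined at the start of this section amounts to the following check over per-trial final accuracies:

import numpy as np

def matches_lars(final_accuracies, target=0.759):
    # Protocol from this section: 50 trials at a fixed 2,512-step budget; another
    # optimizer "matches" LARS if the median final validation accuracy exceeds the target.
    accs = np.asarray(final_accuracies)
    return bool(np.median(accs) > target), int(np.sum(accs >= target))

# Returns (matched?, number of individual trials that reached the target).
# For the published LARS pipeline, the median over 50 trials was 75.97%, with 35 trials reaching the target.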
Our model implementation and data pre-processing pipeline were identical to those required under the MLPerf closed division rules (see Appendix B).
5 https://git.io/JtknD 6 https://mlperf.org/training-results-0-6 7 https://mlperf.org/training-results-0-7 8 The 88 epoch, 65,536 batch size result is faster in terms of wall-clock time but requires more training epochs, indicating that it is beyond LARS's perfect scaling regime. Although LARS obtains diminishing returns when increasing the batch size from 32,768 to 65,536, future work could investigate whether Nesterov momentum drops off more or less rapidly than LARS. 9 Corresponding to 64 training epochs in Kumar et al. [2019]. 10 https://git.io/JtsLQ
We present two Nesterov momentum hyperparameter configurations that achieve comparable performance to LARS. Configuration A achieved a median accuracy of 75.97% (the same as LARS) and reached the target accuracy in 34 out of 50 trials. Configuration B is a modified version of Configuration A designed to make as few changes as possible to the LARS hyperparameters; it achieved a median accuracy of 75.92% and reached the target in 29 out of 50 trials. See Appendix D.1 for the complete hyperparameter configurations.
To achieve these results, we tuned the hyperparameters of the training pipeline from scratch using Nesterov momentum. We ran a series of experiments, each of which searched over a hand-designed hyperparameter search space using quasi-random search [Bousquet et al., 2017]. Between each experiment, we modified the previous search space and/or tweaked the training program to include optimization tricks and non-default hyperparameter values we discovered in the state-of-the-art LARS pipeline. The full sequence of experiments we ran, including the number of trials, hyperparameters tuned, and search space ranges, is provided in Appendix D.4. Once we had matched the LARS result with Configuration A, we tried setting each hyperparameter to its value in the LARS pipeline in order to find the minimal set of changes that still achieved the target result, producing Configuration B. The remainder of this section describes the hyperparameters we tuned and the techniques we applied on the journey to these results.
2.1.1 Nesterov Momentum Optimizer
Nesterov momentum is a variant of classical or "heavy-ball" momentum defined by the update rule
v_{t+1} = µ v_t + ∇ℓ(θ_t),    θ_{t+1} = θ_t − η_t (µ v_{t+1} + ∇ℓ(θ_t)),
where v_0 = 0, θ_t is the vector of model parameters after t steps, ∇ℓ(θ_t) is the gradient of the loss function ℓ(θ) averaged over a batch of training examples, µ is the momentum, and η_t is the learning rate for step t. We prefer Nesterov momentum over classical momentum because it tolerates larger values of its momentum parameter [Sutskever et al., 2013] and sometimes outperforms classical momentum, although the two algorithms perform similarly on many tasks [Shallue et al., 2019, Choi et al., 2019]. We tuned the Nesterov momentum µ in Configurations A and B.
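To make the update rule above concrete, the following is a minimal NumPy sketch of one Nesterov momentum step exactly as written in the equation; the toy quadratic loss and the hyperparameter values are placeholders for illustration, not the settings of Configuration A or B.

import numpy as np

def nesterov_momentum_step(theta, v, grad, lr, mu):
    # v_{t+1} = mu * v_t + grad(theta_t); theta_{t+1} = theta_t - lr_t * (mu * v_{t+1} + grad(theta_t))
    v_new = mu * v + grad
    theta_new = theta - lr * (mu * v_new + grad)
    return theta_new, v_new

# Toy usage on the quadratic loss 0.5 * ||theta||^2, whose gradient is theta itself.
theta, v = np.ones(3), np.zeros(3)  # v_0 = 0, as in the text
for t in range(100):
    theta, v = nesterov_momentum_step(theta, v, grad=theta, lr=0.1, mu=0.98)
print(theta)  # approaches the minimizer at zero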
We discuss the learning rate schedule {η_t} separately in Section 2.1.4.
2.1.2 Batch normalization
The ResNet-50 v1.5 model uses batch normalization [Ioffe and Szegedy, 2015], defined as
BN(x^(l)) = ( (x^(l) − mean(x^(l))) / sqrt(var(x^(l)) + ε) ) × γ^(l) + β^(l),
where x^(l) is a vector of pre-normalization outputs from layer l, mean(·) and var(·) denote the element-wise sample mean and variance across the batch of training examples,11 and γ^(l) and β^(l) are trainable model parameters.
Batch normalization introduces the following tuneable hyperparameters: ε, the small constant added to the sample variance; the initial values of γ^(l) and β^(l); and ρ, which governs the exponential moving averages of the scaling factors used in evaluation. The LARS pipeline uses ε = 10^−5 and ρ = 0.9. It sets the initial value of β^(l) to 0.0 everywhere, but the initial value of γ^(l) depends on the layer: it sets γ^(l) to 0.0 in the final batch normalization layer of each residual block, and to 1.0 everywhere else. In Configuration A, we tuned ε, ρ, and γ0, the initial value of γ^(l) in the final batch normalization layer of each residual block. In Configuration B, we used the same values as LARS for ε and ρ, but we found that choosing γ0 between 0.0 and 1.0 was important for matching the LARS result with Nesterov momentum.
2.1.3 Regularization
In Configuration A, we tuned both the L2 regularization coefficient λ and label smoothing coefficient τ [Szegedy et al., 2016]. The LARS pipeline uses λ = 10^−4 and τ = 0.1.
11 In a distributed training environment the mean and variance are commonly computed over a subset of the full batch. The LARS pipeline uses a "virtual batch size" of 64, which we also use to avoid changing the training objective [Hoffer et al., 2017].
Crucially, the LARS pipeline does not apply L2 regularization to the bias variables of the ResNet model nor the batch normalization parameters γ^(l) and β^(l) (indeed, the published LARS pipeline does not even apply LARS to these parameters – it uses Heavy-ball momentum). This detail is extremely important for both LARS and Nesterov momentum to achieve the fastest training speed. Configuration B used the same λ and τ as Configuration A.
2.1.4 Learning rate schedule
The LARS pipeline uses a piecewise polynomial schedule
η_t = η_init + (η_peak − η_init) (t / t_warmup)^{p_warmup},   for t ≤ t_warmup,
η_t = η_final + (η_peak − η_final) ((T − t) / (T − t_warmup))^{p_decay},   for t > t_warmup,
with η_init = 0.0, η_peak = 29.0, η_final = 10^−4, p_warmup = 1, p_decay = 2, and t_warmup = 706 steps. In Configuration A, we re-tuned all of these hyperparameters with Nesterov momentum. In Configuration B, we set η_init, p_decay, and t_warmup to the same values as LARS, changing only p_warmup from 1 to 2 and re-scaling η_peak and η_final.
2.1.5 Comparing Nesterov momentum and LARS
Table 1 shows the hyperparameter values for Configuration B that differ from the state-of-the-art LARS pipeline. Aside from re-tuning the momentum, learning rate scale, and regularization hyperparameters (whose optimal values are all expected to change with the optimizer), the only changes are setting p_warmup to 2 instead of 1 and re-tuning γ0.
Figure 1 shows the LARS learning rate schedule compared to the Nesterov momentum schedule. Even though these schedules are similar, we found that each optimizer had a different optimal value of the warmup polynomial power.
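For concreteness, the schedule of Section 2.1.4 above can be written as the following sketch; the default arguments plug in the LARS values quoted in the text and the 2,512-step budget, and the function is an illustration of the formula rather than the code of either pipeline.

def piecewise_polynomial_lr(t, total_steps=2512, warmup_steps=706,
                            lr_init=0.0, lr_peak=29.0, lr_final=1e-4,
                            p_warmup=1, p_decay=2):
    # Polynomial warmup from lr_init to lr_peak, then polynomial decay to lr_final.
    if t <= warmup_steps:
        return lr_init + (lr_peak - lr_init) * (t / warmup_steps) ** p_warmup
    frac = (total_steps - t) / (total_steps - warmup_steps)
    return lr_final + (lr_peak - lr_final) * frac ** p_decay

# Configuration B keeps lr_init, p_decay, and warmup_steps, but uses p_warmup = 2
# and rescales lr_peak and lr_final, as described above.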
As Table 2 shows, Nesterov momentum performs198 better with pwarmup = 2 instead of 1, while the opposite199 is true with LARS. As discussed in Agarwal et al. [2020],200 optimizers can induce implicit step size schedules that201 strongly influence their training dynamics and solution202 quality, and it appears from Table 2 that the implicit step203 sizes of Nesterov momentum and LARS may evolve dif-204 ferently, causing the shapes of their optimal learning rate205 schedules to differ.206 Although the main concern of a practitioner is validation performance, the primary task of an207 optimization algorithm is to minimize training loss. Table 2 shows that Nesterov momentum achieves208 higher training accuracy than LARS, despite similar validation performance. Thus, it may be more209 appropriate to consider the layerwise normalization of LARS to be a regularization technique, rather210 than an optimization technique.211 Spending even more effort tuning LARS or Nesterov momentum would likely further improve the212 current state-of-the-art for that optimizer. Meaningful optimizer comparisons are only possible213 with independent and equally intensive tuning efforts, and we do not claim that either optimizer214 outperforms the other on this benchmark. That said, if the main evidence for LARS’s utility as a215 “large-batch optimizer” is its performance on this particular benchmark, then more evidence is needed216 to quantify any benefit it has over traditional, generic optimizers like Nesterov momentum.217 2.2 Lessons learned218 In hindsight, it was only necessary to make a few changes to the LARS pipeline to match its219 performance at batch size 32,768 with Nesterov momentum. However, Table 1 does not accurately220 represent the effort required when attempting to match a highly tuned training-speed benchmark.221 Firstly, as described in Sections 2.1.2 and 2.1.3, the strong results of LARS depend partly on a few222 subtle optimization tricks and non-default values of uncommonly-tuned hyperparameters. Fortunately,223 in this case we could discover these tricks by examining the open-source code required for MLPerf224 submissions, but machine learning research papers do not always report these important details.225 Researchers can easily waste a lot of experiments and produce misleading results before getting all of226 these details right. We demonstrate the importance of adding these tricks to our Nesterov momentum227 pipeline in Appendix C; without these tricks (or some new tricks), we likely would not have been228 able to match the LARS performance.229 Secondly, the learning rate schedule really matters when trying to maximize performance with a230 relatively small step budget. Both LARS and Nesterov momentum are sensitive to small deviations231 from the optimized learning rate schedules in Figure 1, and neither schedule works as well for the232 other optimizer. Although relatively minor changes were sufficient to match LARS with Nesterov233 momentum, there is no way to know a priori how the optimal schedule will look for a new optimizer234 Wu et al. [2018]. Even in toy settings where the optimal learning rate schedule can be derived, it235 does not fit into commonly used schedule families and depends strongly on the optimizer Zhang236 et al. [2019]. Indeed, this problem applies to the other optimization hyperparameters as well: it237 is extremely difficult to know which are worth considering ahead of time. 
Finally, even when we238 narrowed down our hyperparemeter search spaces around the optimal point, the volume of our search239 spaces corresponding to near-peak performance was small, likely due to the small step budget [Shallue240 et al., 2019]. We investigate how these effects change with a less stringent step budget in Section 4.241 3 Stronger BERT pretraining speed baselines242 You et al. [2019] developed the LAMB optimizer in the hope of speeding up training for BERT-Large243 [Bidirectional Encoder Representations from Transformers, Devlin et al., 2018]. BERT training244 consists of two phases. The “pretraining” phase has two objectives: (1) predicting masked tokens245 based on the rest of the sequence (a masked language model), and (2) predicting whether two246 given sentences follow one from another. Finally, the “fine-tuning” phase refines the model for a247 downstream task of interest. BERT pretraining takes a considerable amount of time (up to 3 days on248 16 Cloud TPU-v3 chips Jouppi et al. [2017]), whereas the fine-tuning phase is typically much faster.249 Model quality is typically assessed on the downstream metrics, not on pretraining loss, making BERT250 training a somewhat awkward benchmark for optimization research.251 You et al. [2019] used LAMB for BERT pretraining with batch sizes up to 65,536 and claimed that252 LAMB outperforms Adam batch size 16,384 and beyond. The LAMB optimizer has since appeared253 in several NLP toolkits, including as Microsoft DeepSpeed and NVIDIA Multi-node BERT training,254 and as a benchmark task in MLPerf v0.7.12255 As shown in Table 3, we trained Adam (with decoupled weight decay) baselines that achieve better256 results than both the LAMB and Adam results reported in You et al. [2019]. Our new Adam257 baselines obtain better F1 scores on the development set of the SQuaD v1.1 task in the same number258 of training steps as LAMB for both batch size 32,768 and the hybrid 65,536-then-32,768 batch259 size training regime in You et al. [2019]. We also ran Adam at batch size 65,536 to reach nearly260 the same F1 score as the hybrid batch size LAMB result, but in much fewer training steps. We261 believe 7,818 steps is a new state-of-the-art for BERT pretraining speed [in our experiments, it262 also improves upon the 76-minute record claimed in You et al., 2019]. Additionally, at batch263 size 32,768 our Adam baseline got a better pretraining loss of 1.277 compared to LAMB’s 1.342.264 12 We do not consider the MLPerf task in this paper since it is a warm-start, partial training task. 265 We used the same experimental setup as You266 et al. [2019], including two pretraining phases267 with max sequence lengths of 128 and then 512.268 In order to match You et al. [2019], we reported269 the F1 score on the downstream SQuaD v1.1270 task as the target metric, although this metric271 introduces potential confounds: optimization272 efficiency should be measured on the training273 task using training and held-out data sets. Fortunately, in this case better pretraining performance274 correlated a with higher F1 score after fine-tuning. See Appendix B.2 for additional experiment275 details. We tuned Adam hyperparameters independently for each pretraining phase, specifically276 learning rate η, β1, β2, the polynomial power for the learning rate warmup pwarmup, and weight277 decay λ, using quasi-random search [Bousquet et al., 2017]. 
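As a schematic of what quasi-random search over such a space looks like (this is not the code we used, and the search ranges below are illustrative placeholders rather than our actual spaces), one can map a low-discrepancy sequence onto (log-)uniform hyperparameter ranges:

def halton(index, base):
    # Radical-inverse (Halton) value of `index` in the given base, in [0, 1).
    f, result, i = 1.0, 0.0, index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def sample_adam_trial(k):
    # One quasi-random point in [0, 1)^5, mapped onto illustrative (placeholder) ranges.
    u = [halton(k + 1, b) for b in (2, 3, 5, 7, 11)]
    return {
        "learning_rate": 10.0 ** (-4.0 + 3.0 * u[0]),  # log-uniform in [1e-4, 1e-1]
        "beta1": 1.0 - 10.0 ** (-3.0 + 2.0 * u[1]),    # so that 1 - beta1 is log-uniform
        "beta2": 1.0 - 10.0 ** (-4.0 + 3.0 * u[2]),
        "p_warmup": 1 if u[3] < 0.5 else 2,            # warmup polynomial power
        "weight_decay": 10.0 ** (-3.0 + 2.0 * u[4]),
    }

trials = [sample_adam_trial(k) for k in range(50)]  # 50 quasi-random trials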
See Appendix D.2 for the search spaces. In addition to hyperparameter tuning, our improved Adam results at these batch sizes are also likely due to two implementation differences. First, the Adam implementation in You et al. [2019] comes from the BERT open source code base, in which Adam is missing the standard bias correction.13 The Adam bias correction acts as an additional step size warm-up, thereby potentially improving the stability in the initial steps of training. Second, the BERT learning rate schedule had a discontinuity at the start of the decay phase due to the learning rate decay being incorrectly applied during warm-up14 (see Figure 2 in Appendix B). This peculiarity is part of the official BERT release and is present in 3000+ copies of the BERT Training code on GitHub.
4 Investigating a less stringent step budget
Part of what makes comparing optimizers so difficult is that the hyperparameter tuning tends to dominate the comparisons [Choi et al., 2019]. Moreover, tuning becomes especially difficult when we demand a fixed epoch budget even when dramatically increasing the batch size [Shallue et al., 2019]. Fixing the epoch budget as the batch size increases is equivalent to demanding perfect scaling (i.e. that the number of training steps decreases by the same factor that the batch size is increased). We can view the role of hyperparameter tuning for large batch training as resisting the inevitable end of perfect scaling. For example, it might be possible to extend perfect scaling using delicately tuned learning rate schedules, but comparing optimizers under these conditions can make the learning rate schedule dominate the comparison by favoring some algorithms over others. Therefore, in order to better understand the behavior of LARS and LAMB compared to Nesterov Momentum and Adam, we ran additional ResNet-50 experiments with a more generous 6,000 step budget (vs 2,512 in Section 2) and a simpler cosine learning rate schedule. At batch size 32,768, this budget should let us reach better validation accuracy than the MLPerf target of 75.9%.
Although not mentioned in You et al. [2017], the state-of-the-art MLPerf pipeline for "LARS" actually uses both LARS and Heavy-ball Momentum, with Momentum applied to the batch normalization and ResNet bias parameters and LARS applied to the other parameters. You et al. [2019] does not mention whether LAMB was only applied to some parameters and not others. If layerwise normalization can be harmful for some model parameters, this is critical information for practitioners using LARS or LAMB, since it might not be obvious which optimizer to apply to which parameters. To investigate this, we trained both pure LARS and LAMB configurations, as well as configurations that did not apply layerwise normalization to the batch normalization and ResNet bias parameters. Moreover, LAMB's underlying Adam implementation defaults to ε = 10^−6, rather than the typical 10^−7 or 10^−8. In some cases, ε can be a critical hyperparameter for Adam [Choi et al., 2019], so we included Adam configurations with both ε = 10^−6 and ε = 10^−8.
Table 4 shows the validation accuracy of these different configurations after training for 6,000 steps with batch size 32,768. In every case, we used a simple cosine decay learning rate schedule and tuned the initial learning rate and weight decay using quasi-random search.
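To clarify what applying layerwise normalization to some parameters and not others means in the configurations above, the following is a schematic sketch of a LARS/LAMB-style step in the general form of You et al. [2017, 2019]; the published variants differ in details such as the trust-ratio function, clipping, and where weight decay enters, so this is an illustration rather than a faithful reimplementation of either optimizer.

import numpy as np

def layerwise_step(param, base_update, lr, use_layerwise_norm):
    # `base_update` is the step an underlying optimizer would take for this tensor
    # (a momentum-style update for LARS, an Adam-style update for LAMB).
    if not use_layerwise_norm:
        return param - lr * base_update          # plain Momentum/Adam-style step
    w_norm = np.linalg.norm(param)
    u_norm = np.linalg.norm(base_update)
    trust_ratio = w_norm / u_norm if (w_norm > 0 and u_norm > 0) else 1.0
    return param - lr * trust_ratio * base_update  # rescaled per layer/tensor

# The exempted configurations in Table 4 correspond to calling this with
# use_layerwise_norm=False for the batch normalization and ResNet bias parameters.
# Note that nothing in this rescaling refers to the batch size.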
We used314 momentum parameters of 0.98 for Nesterov momentum and 0.929 for LARS, respectively, based315 on the tuned values from Section 2. We used default hyperparameters for Adam and LAMB316 except where specified. We set all other hyperparameters to the same values as the state-of-the-317 art LARS pipeline, except we set γ0 = 1.0. See Appendix D.3 for more details. As expected,318 13 https://git.io/JtY8d 14 See https://git.io/JtnQW and https://git.io/JtnQ8. highly tuned learning rate schedules and optimizer hyperparameters are no longer necessary with319 a less stringent step budget. Multiple optimizer configurations in Table 4 exceed the MLPerf320 target accuracy of 75.9% at batch size 32,768 with minimal tuning. Training with larger batch321 sizes is not fundamentally unstable: stringent step budgets make hyperparameter tuning trickier.322 LAMB, are introduced alongside claims that337 the new optimizer does not require any—or at338 least minimal—tuning. Unfortunately, these339 claims require a lot of work to support, since340 they require trying the optimizer on new prob-341 lems without using those problems during the342 development of the algorithm. Although our ex-343 periments here are not sufficient to determine344 which optimizers are easiest to tune, experiments like these that operate outside the regime of highly345 tuned learning rate schedules can serve as a starting point. In this experiment, LARS and LAMB do346 not appear to have an advantage in how easy they are to tune even on a dataset and model that were347 used in the development of both of those algorithms. LAMB is a variant of Adam and performs about348 the same as Adam with the same value of ; LARS is more analogous to Momentum and indeed349 Nesterov momentum and LARS have similar performance.350 5 Discussion351 Our results show that standard, generic optimizers suffice for achieving strong results across batch352 sizes. Therefore, any research program to create new optimizers for training at larger batch sizes353 must start from the fact that Momentum, Adam, and likely other standard methods work fine at batch354 sizes as large as those considered in this paper. The LARS and LAMB update rules have no more355 to do with the batch size (or “large” batches) than the Momentum or Adam update rules. Although356 You et al. [2019] presented convergence rate bounds for LARS and LAMB to support their claims357 of superior performance, we show in Appendix A that Adam satisfies a similar bound to LAMB.358 These bounds all rely on very unrealistic assumptions.15 Most of all, they are loose upper bounds359 on the worst case behavior of the algorithms, not accurate reflections of optimizer performance in360 reality. Whether layer-wise normalization can be useful for optimization or regularization remains an361 open question. However, if LARS and LAMB have any advantage over standard techniques, it is not362 that they work dramatically better on the tasks and batch sizes in You et al. [2017, 2019]. This is363 not to suggest that there is nothing interesting about studying neural network optimization at larger364 batch sizes. For example, as gradient noise decreases, there may be opportunities to harness curvature365 information and extend the region of perfect scaling [Zhang et al., 2019]. 
However, there is currently366 no evidence that LARS and LAMB scale better than Momentum and Adam.367 Our primary concern in this paper has been matching the state of the art—and establishing new368 baselines—for training speed measurements of the sort used to justify new techniques and algorithms369 for training with larger batch sizes. In contrast, many practitioners are more concerned with obtaining370 the best possible validation error with a somewhat flexible training time budget. Part of the reason371 why matching LARS at batch size 32,768 was non-trivial is because getting state of the art training372 15 All convergence bounds assume no momentum is used, and the Lavg bound for LAMB also assumes β2 = 0, when it is typically 0.999. Additionally, Lavg could still be large if L∞ is large, but we leave an empirical analysis of this to future work. speed requires several tricks and implementation details that are not often discussed. It was not373 obvious to us a priori which ones would prove crucial. These details do not involve changes to the374 optimizer, but they interact with the optimizer in a regime where all hyperparameters need to be well375 tuned to stay competitive, making it necessary to re-tune everything for a new optimizer.376 In neural network optimization research, training loss is rarely discussed in detail and evaluation377 centers on validation/test performance since that is what practitioners care most about. However,378 although we shouldn’t only consider training loss, it is counter-intuitive and counter-productive to379 elide a careful investigation of the actual objective of the optimizer. If a new optimizer achieves better380 test performance, but shows no speedup on training loss, then perhaps it is not a better optimizer so381 much as an indirect regularizer. 16 Indeed, in our experiments we found that Nesterov momentum382 achieves noticeably better training accuracy on ResNet-50 than the LARS configuration we used,383 despite reaching roughly the same validation accuracy. Properly disentangling possible regularization384 benefits from optimization speed-ups is crucial if we are to understand neural network training,385 especially at larger batch sizes where we lose some of the regularization effect of gradient noise.386 Hypothetically, if the primary benefit of a training procedure is regularization, then it would be better387 to compare the method with other regularization baselines than other optimizers.388 Ultimately, we only care about batch size to the extent that higher degrees of data parallelism lead389 to faster training. Training with a larger batch size is a means, not the end goal. New optimizers—390 whether designed for generic batch sizes or larger batch sizes—have the potential to dramatically391 improve algorithmic efficiency across multiple workloads, but our results show that standard opti-392 mizers can match the performance of newer alternatives on the workloads we considered. Indeed,393 despite the legion of new update rule variants being proposed in the literature, standard Adam and394 Momentum remain the workhorses of practitioners and researchers alike, while independent empirical395 comparisons consistently find no clear winner when optimizers are compared across a variety of396 workloads [Schmidt et al., 2020]. Meanwhile, as Choi et al. [2019] and our results underscore,397 comparisons between optimizers crucially depend on the effort spent tuning hyperparameters for each398 optimizer. 
Given these facts, we should regard with extreme caution studies claiming to show the399 superiority of one particular optimizer over others. Part of the issue stems from current incentives in400 the research community; we overvalue the novelty of new methods and undervalue establishing strong401 baselines to measure progress against. This is particularly problematic in the study of optimizers,402 where the learning rate schedule is arguably more important than the choice of the optimizer update403 rule itself! As our results show, the best learning rate schedule is tightly coupled with the optimizer,404 meaning that tuning the learning rate schedule for a new optimizer will generally favor the new405 optimizer over a baseline unless the schedule of the baseline is afforded the same tuning effort.406 6 Conclusion407 In this work, we demonstrated that standard optimizers, without any layer-wise normalization408 techniques, can match or exceed the large batch size results used to justify LARS and LAMB. Future409 work attempting to argue that a new algorithm is useful by comparing to baseline methods or results,410 including those established in this paper, faces a key challenge in showing that the gains are due to the411 new method and not merely due to better tuning or changes to the training pipeline (e.g. regularization412 tricks). Although gains from tuning will eventually saturate, we can, in principle, always invest more413 effort in tuning and potentially get better results for any optimizer. However, our goal should be414 developing optimizers that work better across many different workloads when taking into account the415 amount of additional tuning they require.416 Moving forward, if we are to reliably make progress we need to rethink how we compare and evaluate417 new optimizers for neural network training. Given how sensitive optimizer performance is to the418 hyperparameter tuning protocol and how difficult it is to quantify hyperparameter tuning effort, we419 can’t expect experiments with self-reported baselines to always lead to fair comparisons. Ideally, new420 training methods would be evaluated in a standardized competitive benchmark, where submitters of421 new optimizers do not have full knowledge of the evaluation workloads. Some efforts in this direction422 have started, for instance the MLCommons Algorithmic Efficiency Working Group17, but more work423 needs to be done to produce incentives for the community to publish well-tuned baselines and to424 reward researchers that conduct the most rigorous empirical comparisons.425 16 Deep learning folk wisdom is that “any method to make training less effective can serve as a regularizer,” whether it is a bug in gradients or a clever algorithm. 17 https://mlcommons.org/en/groups/research-algorithms/ Checklist426 1. For all authors...427 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s428 contributions and scope? [Yes] See Sections 2, 3, 4429 (b) Did you describe the limitations of your work? [Yes] We had a lengthy discussion of430 the limitations and scope of the work in Section 5431 (c) Did you discuss any potential negative societal impacts of your work? [No] We did432 not discuss this in the main text. Our primary contribution is to improve experimental433 protocols for other methodological work, which is so removed from specific machine434 learning applications that it is hard to determine the net impact. 
That said, more435 effective experimental protocols should lead to more effective science which in turn436 should lead to more effective machine learning applications. Whether this development437 is positive or negative for society will depend on who stands to gain from the use of438 machine learning in future applied contexts. Additionally, although our work should, in439 the long run, save computational resources for individual researchers, in net across the440 community this may or may not produce an aggregate savings because more efficient441 machine learning training, by making larger scale projects more accessible, can lead442 to an increased demand for compute resources [York, 2006], which can have varying443 degrees of negative environmental impacts [Patterson et al., 2021].444 (d) Have you read the ethics review guidelines and ensured that your paper conforms to445 them? [Yes]446 2. If you are including theoretical results...447 (a) Did you state the full set of assumptions of all theoretical results? [Yes] See Appendix A448 for a comprehensive description of the problem setting.449 (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix A.450 3. If you ran experiments...451 (a) Did you include the code, data, and instructions needed to reproduce the main experi-452 mental results (either in the supplemental material or as a URL)? [No] We will include453 a link to all code and all possible reproducibility instructions after the anonymized454 reviewing period is over.455 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they456 were chosen)? [Yes] We are extremely detailed about our tuning procedures and dataset457 details, see Appendices B, D.458 (c) Did you report error bars (e.g., with respect to the random seed after running experi-459 ments multiple times)? [Yes] While we do not report error bars in the tables in the main460 text, Appendices B.2, C contains box plots showing the quartiles of the distribution461 over random seeds.462 (d) Did you include the total amount of compute and the type of resources used (e.g., type463 of GPUs, internal cluster, or cloud provider)? [No] In Appendix B we state that we464 run on Google TPUs, however we do not tally up the total number of experiments run465 (although an interested reader could compute it from the information we provided in466 our detailed appendices given that we list all intermediate experiments, no matter how467 silly in hindsight).468 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...469 (a) If your work uses existing assets, did you cite the creators? [Yes] We reference the470 relevant citations for all models, datasets, and techniques.471 (b) Did you mention the license of the assets? [No]472 (c) Did you include any new assets either in the supplemental material or as a URL? [No]473 (d) Did you discuss whether and how consent was obtained from people whose data you’re474 using/curating? [N/A]475 (e) Did you discuss whether the data you are using/curating contains personally identifiable476 information or offensive content? [N/A]477 5. If you used crowdsourcing or conducted research with human subjects...478 (a) Did you include the full text of instructions given to participants and screenshots, if479 applicable? [N/A]480 (b) Did you describe any potential participant risks, with links to Institutional Review481 Board (IRB) approvals, if applicable? 
[N/A]482 (c) Did you include the estimated hourly wage paid to participants and the total amount483 spent on participant compensation? [N/A]484 References485 Martı́n Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.486 Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew487 Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath488 Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,489 Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent490 Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg,491 Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on492 heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from493 tensorflow.org.494 Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, and Cyril Zhang. Disentangling adaptive495 gradient methods from learning rates. arXiv preprint arXiv:2002.11803, 2020.496 Olivier Bousquet, Sylvain Gelly, Karol Kurach, Olivier Teytaud, and Damien Vincent. Critical hyper-497 parameters: No random, no cry. arXiv, 2017. URL https://arxiv.org/abs/1706.03200.498 James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal499 Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and500 Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL501 http://github.com/google/jax.502 Dami Choi, Christopher J Shallue, Zachary Nado, Jaehoon Lee, Chris J Maddison, and George E503 Dahl. On empirical comparisons of optimizers for deep learning. arXiv preprint arXiv:1910.05446,504 2019.505 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep506 bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.507 Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the gen-508 eralization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741,509 2017.510 Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by511 reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.512 Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa,513 Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of514 a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer515 Architecture, pages 1–12, 2017.516 Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint517 arXiv:1412.6980, 2014.518 Sameer Kumar, Victor Bitorff, Dehao Chen, Chiachen Chou, Blake Hechtman, HyoukJoong Lee,519 Naveen Kumar, Peter Mattson, Shibo Wang, Tao Wang, et al. Scale mlperf-0.6 models on google520 tpu-v3 pods. 
arXiv preprint arXiv:1909.09756, 2019.521 Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson,522 Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojy-523 oti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Atsushi Ike, Bill Jia,524 Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Guokai Ma, Deepak Narayanan, Tayo525 Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St.526 John, Tsuguchika Tabaru, Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, Cliff Young, and527 Matei Zaharia. MLPerf training benchmark. arXiv preprint arXiv:1910.01500, 2019. URL528 https://arxiv.org/abs/1910.01500.529 Yurii E Nesterov. A method for solving the convex programming problem with convergence rate530 O(1/kˆ2). In Dokl. akad. nauk Sssr, volume 269, pages 543–547, 1983.531 David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild,532 David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. arXiv533 preprint arXiv:2104.10350, 2021.534 Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR535 Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.536 Robin M Schmidt, Frank Schneider, and Philipp Hennig. Descending through a crowded valley–537 benchmarking deep learning optimizers. arXiv preprint arXiv:2007.01547, 2020.538 Christopher J Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and539 George E Dahl. Measuring the effects of data parallelism on neural network training. Journal of540 Machine Learning Research, 20(112):1–49, 2019.541 Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization542 and momentum in deep learning. In ICML, 2013.543 Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking544 the inception architecture for computer vision. In Proceedings of the IEEE conference on computer545 vision and pattern recognition, pages 2818–2826, 2016.546 Yu Emma Wang, Gu-Yeon Wei, and David Brooks. Benchmarking tpu, gpu, and cpu platforms for547 deep learning. arXiv preprint arXiv:1907.10701, 2019.548 Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in549 stochastic meta-optimization. arXiv preprint arXiv:1803.02021, 2018.550 Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng. Image classification at551 supercomputer scale. arXiv preprint arXiv:1811.06992, 2018.552 Richard York. Ecological paradoxes: William stanley jevons and the paperless office. Human Ecology553 Review, pages 143–147, 2006.554 Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv555 preprint arXiv:1708.03888, 2017.556 Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan557 Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep558 learning: Training bert in 76 minutes. In International Conference on Learning Representations,559 2019.560 Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George Dahl, Chris561 Shallue, and Roger B Grosse. Which algorithmic choices matter at which batch sizes? insights562 from a noisy quadratic model. 
In Advances in Neural Information Processing Systems, pages563 8196–8207, 2019.564 Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and565 Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching566 movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer567 Vision (ICCV), ICCV ’15, page 19–27, USA, 2015. IEEE Computer Society. ISBN 9781467383912.568 doi: 10.1109/ICCV.2015.11. URL https://doi.org/10.1109/ICCV.2015.11.569
1. What is the significance and relevance of the paper regarding its focus on examining LARS and LAMB optimizers?
2. What are the strengths of the paper regarding its quality, contributions, and arguments?
3. How does the reviewer assess the novelty and impact of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This is a very interesting work focused on a critical examination of the LARS and LAMB optimizers, and the comparison of these with more established "standard" optimizers such as Adam. The authors' results and analysis are insightful and of wide interest to the deep learning community.
Review
Firstly, the focus of this work is timely, relevant, and well motivated, making it significant and of interest to the deep learning community. The quality of writing is good, so quality standards are high in this respect too. The main contribution comes in the form of a series of insights pertaining to why LARS and LAMB perform well (or appear to), when they do not, and how they should be meaningfully compared with Adam and the like. The authors' arguments are convincing, as is the evidence presented. Overall, this is a high-quality paper which should be accepted. The key contributions are summarized well and succinctly by the authors themselves, so I quote: "standard optimizers, without any layer-wise normalization techniques, can match or exceed the large batch size results used to justify LARS and LAMB. Future work attempting to argue that a new algorithm is useful by comparing to baseline methods or results [...] faces a key challenge in showing that the gains are due to the new method and not merely due to better tuning or changes to the training pipeline"
NIPS
Title
A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes
Abstract
Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally.
1 Introduction
In recent years, hardware systems employing GPUs and TPUs have enabled neural network training programs to process dramatically more data in parallel than ever before. The most popular way to exploit these systems is to increase the batch size in the optimization algorithm (i.e. the number of training examples processed per training step). On many workloads, modern systems can scale to larger batch sizes without significantly increasing the time per step [Jouppi et al., 2017, Wang et al., 2019], thus proportionally increasing the number of training examples processed per second. If researchers can use this increased throughput to reduce the time required to train each neural network, then they should achieve better results by training larger models, using larger datasets, and by exploring new ideas more rapidly.
As the capacity for data parallelism continues to increase, practitioners can take their existing, well-tuned training configurations and re-train with larger batch sizes, hoping to achieve the same performance in less training time [e.g. Ying et al., 2018]. On an idealized data-parallel system with negligible overhead from increasing the batch size, they might hope to achieve perfect scaling, a proportional reduction in training time as the batch size increases.
However, achieving perfect scaling is not always straightforward. Changing the batch size changes the training dynamics, requiring the training hyperparameters (e.g.
learning rate) to be carefully28 re-tuned in order to maintain the same level of validation performance.1 In addition, smaller batch29 sizes provide implicit regularization from gradient noise that may need to be replaced by other forms30 of regularization when the batch size is increased. Finally, even with perfect tuning, increasing31 1 Although there are heuristics for adjusting the learning rate as the batch size changes, these heuristics inevitably break down sufficiently far from the initial batch size and it is also not clear how to apply them to other training hyperparameters (e.g. momentum). Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. the batch size eventually produces diminishing returns. After a critical batch size, the number of32 training steps cannot be decreased in proportion to the batch size – the number of epochs must33 increase to match the validation performance of the smaller batch size. See Shallue et al. 2019 for a34 survey of the effects of data parallelism on neural network training. Once these effects are taken into35 account, there is no strong evidence that increasing the batch size degrades the maximum achievable36 performance on any workload. At the same time, the ever-increasing capacity for data parallelism37 presents opportunities for new regularization techniques that can replace the gradient noise of smaller38 batch sizes and new optimization algorithms that can extend perfect scaling to larger batch sizes by39 using more sophisticated gradient information [Zhang et al., 2019].40 You et al. [2017] proposed the LARS optimization algorithm in the hope of speeding up neural41 network training by exploiting larger batch sizes. LARS is a variant of stochastic gradient descent42 (SGD) with momentum [Polyak, 1964] that applies layer-wise normalization before applying each43 gradient update. Although it is difficult to draw strong conclusions from the results presented in the44 LARS paper, 2 the MLPerf3 Training benchmark4 adopted LARS as one of two allowed algorithms45 in the closed division for ResNet-50 on ImageNet and it became the de facto standard algorithm for46 that benchmark task. With MLPerf entrants competing to find the fastest-training hyperparameters47 for LARS, the first place submissions in the two most recent MLPerf Training competitions used48 LARS to achieve record training speeds with batch sizes of 32,678 and 65,536, respectively. No49 publications or competitive submissions to MLPerf have attempted to match these results with a50 standard optimizer (e.g. Momentum or Adam). However, MLPerf entrants do not have a strong51 incentive (nor are necessarily permitted by the rules) to explore other algorithms because MLPerf52 Training is a systems benchmark that requires algorithmic equivalence between submissions to make53 fair comparisons. Moreover, since the main justification for LARS is its excellent performance on54 ResNet-50 at large batch sizes, more work is needed to quantify any benefit of LARS over standard55 algorithms at any batch size.56 You et al. [2019] later proposed the LAMB optimizer to speed up pre-training for BERT [Devlin57 et al., 2018] using larger batch sizes after concluding that LARS was not effective across workloads.58 LAMB is a variant of Adam [Kingma and Ba, 2014] that adds a similar layer-wise normalization step59 to LARS. You et al. 
[2019] used LAMB for BERT pre-training with batch sizes up to 65,536 and60 claimed that Adam cannot match the performance of LAMB beyond batch size 16,384.61 In this paper, we demonstrate that standard optimizers, without any layer-wise normalization tech-62 niques, can match or improve upon the large batch size results used to justify LARS and LAMB. In63 Section 2, we show that Nesterov momentum [Nesterov, 1983] matches the performance of LARS on64 the ResNet-50 benchmark with batch size 32,768. We are the first to match this result with a standard65 optimizer. In Section 3, contradicting the claims in You et al. [2019], we show that Adam obtains66 better BERT pre-training results than LAMB at the largest batch sizes, resulting in better downstream67 performance metrics after fine-tuning.68 In addition, we establish a new state-of-the-art for BERT pretraining speed, reaching an F1 score of69 90.46 in 7,818 steps using Adam at batch size 65,536 (we report training speed in steps because our70 focus is algorithmic efficiency, but since we compare LARS and LAMB to simpler optimizers, fewer71 training steps corresponds to faster wall-time in an optimized implementation – our BERT result72 with Adam also improves upon the wall-time record of LAMB reported in You et al. 2019). Taken73 together, our results establish stronger training speed baselines for these tasks and batch sizes, which74 we hope will assist future work aiming to accelerate training using larger batch sizes.75 In addition to the contributions mentioned above, we demonstrate several key effects that are often76 overlooked by studies aiming to establish the superiority of new optimization algorithms. We show77 that future work must carefully disentangle regularization and optimization effects when comparing a78 new optimizer to baselines. We also report several under-documented details used to generate the79 best LARS and LAMB results, a reminder that future comparisons should document any novel tricks80 and include them in baselines. Finally, our results add to existing evidence in the literature on the81 difficulty of performing independently rigorous hyperparameter tuning for optimizers and baselines.82 2 The modified AlexNet on ImageNet benchmark did not have well-established accuracy targets from prior work and LARS used a more general learning rate schedule than the momentum baseline. For ResNet-50 on ImageNet, LARS achieved sub-par accuracy numbers and was not compared to any other optimizer at the same batch size, leaving open the possibility that a generic optimizer would scale just as well as LARS. 3 MLPerf is a trademark of MLCommons.org. 4 https://mlperf.org/training-overview In particular, we show that the optimal shape of the learning rate schedule is optimizer-dependent (in83 addition to the scale), and that differences in the schedule can dominate optimizer comparisons at84 smaller step budgets and become less important at larger step budgets.85 1.1 Related work86 Shallue et al. [2019] and Zhang et al. [2019] explored the effects of data parallelism on neural network87 training for different optimizers, finding no evidence that larger batch sizes degrade performance88 and demonstrating that different optimizers can achieve perfect scaling up to different critical batch89 sizes. You et al. [2017, 2019] developed the LARS and LAMB optimizers in the hope of speeding up90 training by achieving perfect scaling beyond standard optimizers. 
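The text describes LARS as momentum with layer-wise normalization applied before each update, and LAMB as the analogous modification of Adam, but does not reproduce the update rules. The sketch below is a minimal, non-authoritative illustration of the layer-wise "trust ratio" idea commonly attributed to LARS; the function and variable names are ours, and real pipelines differ in details (clipping, which parameters are excluded, where weight decay enters).

```python
import numpy as np

def lars_like_step(params, grads, velocities, lr, momentum=0.9, weight_decay=1e-4, eps=1e-9):
    """Sketch of a LARS-style layer-wise normalized momentum update.

    For each layer, the global learning rate is rescaled by a trust ratio
    ||w|| / (||g + wd * w|| + eps) before a heavy-ball momentum step.
    Treat this only as an illustration of the layer-wise normalization idea.
    """
    for name in params:
        w, g, v = params[name], grads[name], velocities[name]
        g = g + weight_decay * w
        w_norm, g_norm = np.linalg.norm(w), np.linalg.norm(g)
        trust_ratio = w_norm / (g_norm + eps) if w_norm > 0 else 1.0
        v[:] = momentum * v + lr * trust_ratio * g   # heavy-ball momentum on the rescaled gradient
        w -= v
    return params, velocities

# Illustrative usage with two toy "layers".
rng = np.random.default_rng(0)
params = {"conv": rng.normal(size=(3, 3)), "dense": rng.normal(size=(4,))}
grads = {k: rng.normal(size=p.shape) for k, p in params.items()}
vels = {k: np.zeros_like(p) for k, p in params.items()}
params, vels = lars_like_step(params, grads, vels, lr=0.1)
```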
Many other recent papers have91 proposed new optimization algorithms for generic batch sizes or larger batch sizes [see Schmidt92 et al., 2020]. Choi et al. [2019] and Schmidt et al. [2020] demonstrated the difficulties with fairly93 comparing optimizers, showing that the hyperparameter tuning protocol is a key determinant of94 optimizer rankings. The MLPerf Training benchmark [Mattson et al., 2019] provides a competitive95 ranking of neural network training systems, but does not shed much light on the relative performance96 of optimizers because entrants are limited in the algorithms they can use and the hyperparameters97 they can tune.98 2 Matching LARS on ImageNet99 The MLPerf training benchmark for ResNet-50 v1.5 on ImageNet [Mattson et al., 2019] aims to100 reach 75.9% validation accuracy in the shortest possible wall-clock time. In the closed division of101 the competition, entrants must choose between two optimizers, SGD with momentum or LARS, and102 are only allowed to tune a specified subset of the optimization hyperparameters, with the remaining103 hyperparameter values set by the competition rules.5 The winning entries in the two most recent104 competitions used LARS with batch size 32,768 for 72 training epochs6 and LARS with batch size105 65,536 for 88 training epochs,7 respectively. Kumar et al. [2019] later improved the training time106 for batch size 32,768 by reaching the target accuracy in 64 epochs. These are currently the fastest107 published results on the ResNet-50 benchmark. However, it has been unclear whether LARS was108 necessary to achieve these training speeds since no recent published results or competitive MLPerf109 submissions have used another optimizer. In this section, we describe how we matched the 64 epoch,110 32,768 batch size result of LARS using standard Nesterov momentum.8111 A fair benchmark of training algorithms or hardware systems must account for stochasticity in112 individual training runs. In the MLPerf competition, the benchmark metric is the mean wall-clock113 time of 5 trials after the fastest and slowest trials are excluded. Only 4 out of the 5 trials need to reach114 the target accuracy and there is no explicit limit on the number of times an entrant can try a different115 set of 5 trials. Since our goal is to compare algorithms, rather than systems, we aim to match the116 LARS result in terms of training steps instead (but since Nesterov momentum is computationally117 simpler than LARS, this would also correspond to faster wall-clock time on an optimized system).118 Specifically, we measure the median validation accuracy over 50 training runs with a fixed budget of119 2,512 training steps9 at a batch size of 32,768. When we ran the published LARS training pipeline,10120 LARS achieved a median accuracy of 75.97% and reached the target in 35 out of 50 trials. We121 consider the LARS result to be matched by another optimizer if the median over 50 trials exceeds the122 target of 75.9%.123 2.1 Nesterov momentum at batch size 32k124 This section describes how we used the standard Nesterov momentum optimizer to train the ResNet-125 50 v1.5 on ImageNet to 75.9% validation accuracy in 2,512 update steps at a batch size of 32,768,126 matching the best published LARS result at this batch size. Although we implemented our own127 training program, the only logical changes we made to the published LARS pipeline were to the128 optimizer and the optimization hyperparameters. 
Our model implementation and data pre-processing pipeline were identical to those required under the MLPerf closed division rules (see Appendix B).
5 https://git.io/JtknD
6 https://mlperf.org/training-results-0-6
7 https://mlperf.org/training-results-0-7
8 The 88 epoch, 65,536 batch size result is faster in terms of wall-clock time but requires more training epochs, indicating that it is beyond LARS’s perfect scaling regime. Although LARS obtains diminishing returns when increasing the batch size from 32,768 to 65,536, future work could investigate whether Nesterov momentum drops off more or less rapidly than LARS.
9 Corresponding to 64 training epochs in Kumar et al. [2019].
10 https://git.io/JtsLQ
We present two Nesterov momentum hyperparameter configurations that achieve comparable performance to LARS. Configuration A achieved a median accuracy of 75.97% (the same as LARS) and reached the target accuracy in 34 out of 50 trials. Configuration B is a modified version of Configuration A designed to make as few changes as possible to the LARS hyperparameters; it achieved a median accuracy of 75.92% and reached the target in 29 out of 50 trials. See Appendix D.1 for the complete hyperparameter configurations.
To achieve these results, we tuned the hyperparameters of the training pipeline from scratch using Nesterov momentum. We ran a series of experiments, each of which searched over a hand-designed hyperparameter search space using quasi-random search [Bousquet et al., 2017]. Between each experiment, we modified the previous search space and/or tweaked the training program to include optimization tricks and non-default hyperparameter values we discovered in the state-of-the-art LARS pipeline. The full sequence of experiments we ran, including the number of trials, hyperparameters tuned, and search space ranges, is provided in Appendix D.4. Once we had matched the LARS result with Configuration A, we tried setting each hyperparameter to its value in the LARS pipeline in order to find the minimal set of changes that still achieved the target result, producing Configuration B. The remainder of this section describes the hyperparameters we tuned and the techniques we applied on the journey to these results.
2.1.1 Nesterov Momentum Optimizer
Nesterov momentum is a variant of classical or “heavy-ball” momentum defined by the update rule
v_{t+1} = µ v_t + ∇ℓ(θ_t),   θ_{t+1} = θ_t − η_t (µ v_{t+1} + ∇ℓ(θ_t)),
where v_0 = 0, θ_t is the vector of model parameters after t steps, ∇ℓ(θ_t) is the gradient of the loss function ℓ(θ) averaged over a batch of training examples, µ is the momentum, and η_t is the learning rate for step t. We prefer Nesterov momentum over classical momentum because it tolerates larger values of its momentum parameter [Sutskever et al., 2013] and sometimes outperforms classical momentum, although the two algorithms perform similarly on many tasks [Shallue et al., 2019, Choi et al., 2019]. We tuned the Nesterov momentum µ in Configurations A and B.
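A minimal NumPy sketch of the Nesterov momentum update rule written above; the quadratic toy loss and the momentum value 0.98 (the tuned value quoted in Section 4) are only for illustration.

```python
import numpy as np

def nesterov_step(theta, v, grad, lr, mu=0.98):
    """One step of the Nesterov momentum update rule quoted above."""
    v_next = mu * v + grad                           # v_{t+1} = mu * v_t + grad(theta_t)
    theta_next = theta - lr * (mu * v_next + grad)   # theta_{t+1} = theta_t - eta_t * (mu * v_{t+1} + grad(theta_t))
    return theta_next, v_next

# Toy usage on the quadratic loss 0.5 * ||theta||^2, whose gradient is theta itself.
theta, v = np.ones(3), np.zeros(3)
for t in range(200):
    theta, v = nesterov_step(theta, v, grad=theta, lr=0.1)
print(np.abs(theta).max())  # approaches zero (oscillating, since the momentum is high)
```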
We discuss the learning rate schedule {η_t} separately in Section 2.1.4.
2.1.2 Batch normalization
The ResNet-50 v1.5 model uses batch normalization [Ioffe and Szegedy, 2015], defined as
BN(x^(l)) = ((x^(l) − mean(x^(l))) / sqrt(var(x^(l)) + ε)) × γ^(l) + β^(l),
where x^(l) is a vector of pre-normalization outputs from layer l, mean(·) and var(·) denote the element-wise sample mean and variance across the batch of training examples,11 and γ^(l) and β^(l) are trainable model parameters.
Batch normalization introduces the following tuneable hyperparameters: ε, the small constant added to the sample variance; the initial values of γ^(l) and β^(l); and ρ, which governs the exponential moving averages of the scaling factors used in evaluation. The LARS pipeline uses ε = 10−5 and ρ = 0.9. It sets the initial value of β^(l) to 0.0 everywhere, but the initial value of γ^(l) depends on the layer: it sets γ^(l) to 0.0 in the final batch normalization layer of each residual block, and to 1.0 everywhere else. In Configuration A, we tuned ε, ρ, and γ0, the initial value of γ^(l) in the final batch normalization layer of each residual block. In Configuration B, we used the same values as LARS for ε and ρ, but we found that choosing γ0 between 0.0 and 1.0 was important for matching the LARS result with Nesterov momentum.
2.1.3 Regularization
In Configuration A, we tuned both the L2 regularization coefficient λ and the label smoothing coefficient τ [Szegedy et al., 2016]. The LARS pipeline uses λ = 10−4 and τ = 0.1.
11 In a distributed training environment the mean and variance are commonly computed over a subset of the full batch. The LARS pipeline uses a “virtual batch size” of 64, which we also use to avoid changing the training objective [Hoffer et al., 2017].
Crucially, the LARS pipeline does not apply L2 regularization to the bias variables of the ResNet model nor the batch normalization parameters γ^(l) and β^(l) (indeed, the published LARS pipeline does not even apply LARS to these parameters – it uses Heavy-ball momentum). This detail is extremely important for both LARS and Nesterov momentum to achieve the fastest training speed. Configuration B used the same λ and τ as Configuration A.
2.1.4 Learning rate schedule
The LARS pipeline uses a piecewise polynomial schedule
η_t = η_init + (η_peak − η_init) (t / t_warmup)^{p_warmup},   for t ≤ t_warmup,
η_t = η_final + (η_peak − η_final) ((T − t) / (T − t_warmup))^{p_decay},   for t > t_warmup,
with η_init = 0.0, η_peak = 29.0, η_final = 10−4, p_warmup = 1, p_decay = 2, and t_warmup = 706 steps. In Configuration A, we re-tuned all of these hyperparameters with Nesterov momentum. In Configuration B, we set η_init, p_decay, and t_warmup to the same values as LARS, changing only p_warmup from 1 to 2 and re-scaling η_peak and η_final.
2.1.5 Comparing Nesterov momentum and LARS
Table 1 shows the hyperparameter values for Configuration B that differ from the state-of-the-art LARS pipeline. Aside from re-tuning the momentum, learning rate scale, and regularization hyperparameters (whose optimal values are all expected to change with the optimizer), the only changes are setting p_warmup to 2 instead of 1 and re-tuning γ0.
Figure 1 shows the LARS learning rate schedule compared to the Nesterov momentum schedule. Even though these schedules are similar, we found that each optimizer had a different optimal value of the warmup polynomial power.
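A small sketch of the piecewise polynomial schedule defined in Section 2.1.4, plugged with the LARS values quoted above and the 2,512-step budget from Section 2; the helper name is ours.

```python
def piecewise_polynomial_lr(t, T, eta_init=0.0, eta_peak=29.0, eta_final=1e-4,
                            p_warmup=1, p_decay=2, t_warmup=706):
    """Piecewise polynomial warmup/decay schedule with the LARS values quoted in the text."""
    if t <= t_warmup:
        return eta_init + (eta_peak - eta_init) * (t / t_warmup) ** p_warmup
    return eta_final + (eta_peak - eta_final) * ((T - t) / (T - t_warmup)) ** p_decay

# Example over the 2,512-step budget used in Section 2.
schedule = [piecewise_polynomial_lr(t, T=2512) for t in range(2512 + 1)]
print(schedule[0], max(schedule), schedule[-1])  # 0.0 at the start, peak 29.0 at step 706, 1e-4 at the end
```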
As Table 2 shows, Nesterov momentum performs198 better with pwarmup = 2 instead of 1, while the opposite199 is true with LARS. As discussed in Agarwal et al. [2020],200 optimizers can induce implicit step size schedules that201 strongly influence their training dynamics and solution202 quality, and it appears from Table 2 that the implicit step203 sizes of Nesterov momentum and LARS may evolve dif-204 ferently, causing the shapes of their optimal learning rate205 schedules to differ.206 Although the main concern of a practitioner is validation performance, the primary task of an207 optimization algorithm is to minimize training loss. Table 2 shows that Nesterov momentum achieves208 higher training accuracy than LARS, despite similar validation performance. Thus, it may be more209 appropriate to consider the layerwise normalization of LARS to be a regularization technique, rather210 than an optimization technique.211 Spending even more effort tuning LARS or Nesterov momentum would likely further improve the212 current state-of-the-art for that optimizer. Meaningful optimizer comparisons are only possible213 with independent and equally intensive tuning efforts, and we do not claim that either optimizer214 outperforms the other on this benchmark. That said, if the main evidence for LARS’s utility as a215 “large-batch optimizer” is its performance on this particular benchmark, then more evidence is needed216 to quantify any benefit it has over traditional, generic optimizers like Nesterov momentum.217 2.2 Lessons learned218 In hindsight, it was only necessary to make a few changes to the LARS pipeline to match its219 performance at batch size 32,768 with Nesterov momentum. However, Table 1 does not accurately220 represent the effort required when attempting to match a highly tuned training-speed benchmark.221 Firstly, as described in Sections 2.1.2 and 2.1.3, the strong results of LARS depend partly on a few222 subtle optimization tricks and non-default values of uncommonly-tuned hyperparameters. Fortunately,223 in this case we could discover these tricks by examining the open-source code required for MLPerf224 submissions, but machine learning research papers do not always report these important details.225 Researchers can easily waste a lot of experiments and produce misleading results before getting all of226 these details right. We demonstrate the importance of adding these tricks to our Nesterov momentum227 pipeline in Appendix C; without these tricks (or some new tricks), we likely would not have been228 able to match the LARS performance.229 Secondly, the learning rate schedule really matters when trying to maximize performance with a230 relatively small step budget. Both LARS and Nesterov momentum are sensitive to small deviations231 from the optimized learning rate schedules in Figure 1, and neither schedule works as well for the232 other optimizer. Although relatively minor changes were sufficient to match LARS with Nesterov233 momentum, there is no way to know a priori how the optimal schedule will look for a new optimizer234 Wu et al. [2018]. Even in toy settings where the optimal learning rate schedule can be derived, it235 does not fit into commonly used schedule families and depends strongly on the optimizer Zhang236 et al. [2019]. Indeed, this problem applies to the other optimization hyperparameters as well: it237 is extremely difficult to know which are worth considering ahead of time. 
Finally, even when we narrowed down our hyperparameter search spaces around the optimal point, the volume of our search spaces corresponding to near-peak performance was small, likely due to the small step budget [Shallue et al., 2019]. We investigate how these effects change with a less stringent step budget in Section 4.
3 Stronger BERT pretraining speed baselines
You et al. [2019] developed the LAMB optimizer in the hope of speeding up training for BERT-Large [Bidirectional Encoder Representations from Transformers, Devlin et al., 2018]. BERT training consists of two phases. The “pretraining” phase has two objectives: (1) predicting masked tokens based on the rest of the sequence (a masked language model), and (2) predicting whether two given sentences follow one another. Finally, the “fine-tuning” phase refines the model for a downstream task of interest. BERT pretraining takes a considerable amount of time (up to 3 days on 16 Cloud TPU-v3 chips [Jouppi et al., 2017]), whereas the fine-tuning phase is typically much faster. Model quality is typically assessed on the downstream metrics, not on pretraining loss, making BERT training a somewhat awkward benchmark for optimization research.
You et al. [2019] used LAMB for BERT pretraining with batch sizes up to 65,536 and claimed that LAMB outperforms Adam at batch size 16,384 and beyond. The LAMB optimizer has since appeared in several NLP toolkits, including Microsoft DeepSpeed and NVIDIA Multi-node BERT training, and as a benchmark task in MLPerf v0.7.12
As shown in Table 3, we trained Adam (with decoupled weight decay) baselines that achieve better results than both the LAMB and Adam results reported in You et al. [2019]. Our new Adam baselines obtain better F1 scores on the development set of the SQuAD v1.1 task in the same number of training steps as LAMB for both batch size 32,768 and the hybrid 65,536-then-32,768 batch size training regime in You et al. [2019]. We also ran Adam at batch size 65,536 to reach nearly the same F1 score as the hybrid batch size LAMB result, but in far fewer training steps. We believe 7,818 steps is a new state-of-the-art for BERT pretraining speed [in our experiments, it also improves upon the 76-minute record claimed in You et al., 2019]. Additionally, at batch size 32,768 our Adam baseline got a better pretraining loss of 1.277 compared to LAMB’s 1.342.
12 We do not consider the MLPerf task in this paper since it is a warm-start, partial training task.
We used the same experimental setup as You et al. [2019], including two pretraining phases with max sequence lengths of 128 and then 512. In order to match You et al. [2019], we reported the F1 score on the downstream SQuAD v1.1 task as the target metric, although this metric introduces potential confounds: optimization efficiency should be measured on the training task using training and held-out data sets. Fortunately, in this case better pretraining performance correlated with a higher F1 score after fine-tuning. See Appendix B.2 for additional experiment details. We tuned Adam hyperparameters independently for each pretraining phase, specifically the learning rate η, β1, β2, the polynomial power for the learning rate warmup p_warmup, and the weight decay λ, using quasi-random search [Bousquet et al., 2017].
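Quasi-random search [Bousquet et al., 2017] is used throughout the paper but not spelled out in this section. The sketch below illustrates the idea with a simple Halton sequence over log-scaled learning rate and weight decay ranges; the bounds are made up for illustration and are not the search spaces from the appendices.

```python
import numpy as np

def halton(n, base):
    """First n points of the 1-D Halton sequence, a simple low-discrepancy (quasi-random) sequence."""
    seq = []
    for i in range(1, n + 1):
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        seq.append(r)
    return np.array(seq)

# Quasi-random search over (learning rate, weight decay) on log scales; bounds are illustrative only.
n_trials = 16
u = np.stack([halton(n_trials, 2), halton(n_trials, 3)], axis=1)  # low-discrepancy points in [0, 1)^2
log_lr = -3.0 + u[:, 0] * (1.0 - (-3.0))    # learning rate in [1e-3, 1e1]
log_wd = -6.0 + u[:, 1] * (-3.0 - (-6.0))   # weight decay in [1e-6, 1e-3]
trials = [{"learning_rate": 10 ** a, "weight_decay": 10 ** b} for a, b in zip(log_lr, log_wd)]
for cfg in trials[:3]:
    print(cfg)  # each trial's hyperparameters would be passed to one training run
```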
See Appendix D.2 for the search spaces. In addition to hyperparameter tuning, our improved Adam results at these batch sizes are also likely due to two implementation differences. First, the Adam implementation in You et al. [2019] comes from the BERT open source code base, in which Adam is missing the standard bias correction.13 The Adam bias correction acts as an additional step size warm-up, thereby potentially improving the stability in the initial steps of training. Second, the BERT learning rate schedule had a discontinuity at the start of the decay phase due to the learning rate decay being incorrectly applied during warm-up14 (see Figure 2 in Appendix B). This peculiarity is part of the official BERT release and is present in 3000+ copies of the BERT training code on GitHub.
4 Investigating a less stringent step budget
Part of what makes comparing optimizers so difficult is that the hyperparameter tuning tends to dominate the comparisons [Choi et al., 2019]. Moreover, tuning becomes especially difficult when we demand a fixed epoch budget even when dramatically increasing the batch size [Shallue et al., 2019]. Fixing the epoch budget as the batch size increases is equivalent to demanding perfect scaling (i.e. that the number of training steps decreases by the same factor that the batch size is increased). We can view the role of hyperparameter tuning for large batch training as resisting the inevitable end of perfect scaling. For example, it might be possible to extend perfect scaling using delicately tuned learning rate schedules, but comparing optimizers under these conditions can make the learning rate schedule dominate the comparison by favoring some algorithms over others. Therefore, in order to better understand the behavior of LARS and LAMB compared to Nesterov Momentum and Adam, we ran additional ResNet-50 experiments with a more generous 6,000 step budget (vs 2,512 in Section 2) and a more simplistic cosine learning rate schedule. At batch size 32,768, this budget should let us reach better validation accuracy than the MLPerf target of 75.9%.
Although not mentioned in You et al. [2017], the state-of-the-art MLPerf pipeline for “LARS” actually uses both LARS and Heavy-ball Momentum, with Momentum applied to the batch normalization and ResNet bias parameters and LARS applied to the other parameters. You et al. [2019] does not mention whether LAMB was only applied to some parameters and not others. If layerwise normalization can be harmful for some model parameters, this is critical information for practitioners using LARS or LAMB, since it might not be obvious which optimizer to apply to which parameters. To investigate this, we trained both pure LARS and LAMB configurations, as well as configurations that did not apply layerwise normalization to the batch normalization and ResNet bias parameters. Moreover, LAMB’s underlying Adam implementation defaults to ε = 10−6, rather than the typical 10−7 or 10−8. In some cases, ε can be a critical hyperparameter for Adam [Choi et al., 2019], so we included Adam configurations with both ε = 10−6 and ε = 10−8.
Table 4 shows the validation accuracy of these different configurations after training for 6,000 steps with batch size 32,768. In every case, we used a simple cosine decay learning rate schedule and tuned the initial learning rate and weight decay using quasi-random search.
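Because the text attributes part of the BERT gap to the missing Adam bias correction and to the value of ε, a minimal sketch of the standard Adam update [Kingma and Ba, 2014] with both written out explicitly may help; the toy usage below is ours and only shows how the correction shrinks the very first update, consistent with the warm-up interpretation above.

```python
import numpy as np

def adam_step(theta, m, v, grad, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
              bias_correction=True):
    """Standard Adam update with the bias correction and epsilon made explicit.
    Passing bias_correction=False mimics the omission discussed above: the
    correction shrinks the earliest updates, acting as an implicit step-size warm-up."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t) if bias_correction else m
    v_hat = v / (1 - beta2 ** t) if bias_correction else v
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Size of the very first update on a unit gradient, with and without the correction.
for correct in (True, False):
    theta, m, v = np.zeros(1), np.zeros(1), np.zeros(1)
    theta, m, v = adam_step(theta, m, v, grad=np.ones(1), t=1, bias_correction=correct)
    print(correct, float(np.abs(theta[0])))  # ~1e-3 with the correction, ~3.2e-3 without
```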
In Advances in Neural Information Processing Systems, pages563 8196–8207, 2019.564 Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and565 Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching566 movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer567 Vision (ICCV), ICCV ’15, page 19–27, USA, 2015. IEEE Computer Society. ISBN 9781467383912.568 doi: 10.1109/ICCV.2015.11. URL https://doi.org/10.1109/ICCV.2015.11.569
1. What is the focus of the paper regarding neural network training?
2. What are the strengths of the proposed approach compared to prior works?
3. Do you have any concerns or questions about the paper's content?
4. How does the reviewer assess the significance and impact of the paper's findings?
5. Are there any limitations or potential biases in the paper's methodology or conclusions?
Summary Of The Paper Review
Summary Of The Paper
In this work, the authors compared the standard optimizers (i.e., Nesterov momentum, Adam) and optimizers with layer-wise normalization (i.e., LARS, LAMB) for training neural networks with large batch sizes. Although LARS and LAMB were proposed and known as "large-batch optimizers," the authors showed that the standard optimizers with careful hyperparameter tuning could match or exceed the state-of-the-art results of LARS (for training ResNet-50 on ImageNet, batch size = 32K) and LAMB (for pre-training BERT, batch size = 65K) at large-batch settings within the same step budgets. The authors pointed out that the strong results depend on several optimization/regularization tricks, non-default values of uncommonly-tuned hyperparameters, and a careful learning rate schedule, but such details are not often discussed. In doing so, the paper highlights the difficulties of comparing optimizers and the importance of rigorous empirical comparisons and open-sourcing in deep learning optimizer research.

Review
This paper is well-organized, and the background, experimental methods, results, and lessons are clearly presented. Based on their rigorous empirical comparisons, the authors showed that the standard optimizers (i.e., Nesterov momentum, Adam) could match or exceed the results of the optimizers designed for large-batch settings (i.e., LARS, LAMB). Since it has been believed that we cannot use such standard optimizers in large-batch settings, their results are surprising and make us re-examine how we should compare optimizers and whether we really need new optimizers. Although their results are only in large-batch settings, it is expected that we can observe the same phenomena (i.e., standard optimizers can meet or exceed the results of optimizers tailored for a specific task) in other settings. This observation is beneficial, especially for practitioners. In particular, I agree with the authors' claim that the importance of uncommonly-tuned hyperparameters, the learning rate schedule, and disentangling optimization speed and regularization has not been discussed in detail, and I appreciate this work for shedding light on this point with convincing empirical results. I believe the community should appreciate the points made in this paper.

Minor comments:
L93: the citation to Schmidt et al. [2020] is repeated.

++++++++++++++++++++++++++++++++
Update after seeing the other reviews and the authors' response
As some reviewers (and the authors) mentioned, the importance of hyperparameter tuning and the fact that "there is no clear winner" are already well-known (and this is why standard optimizers such as Momentum SGD and Adam are still the most popular choices because of their simplicity). However, LARS/LAMB has been used as the first choice in large-batch settings, yet there is not enough evidence and discussion that it generally works better than the standard optimizers. This paper is the first study to point this out and show that the standard optimizers still work even in large-batch settings. For the above reason, I believe this paper provides an important lesson for readers, and I wish to keep recommending acceptance. But I also agree with other reviewers that the "contribution is too narrow" (the readers interested in large-batch training may be limited). So I updated my score from 8 to 7.
NIPS
Title A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes

Abstract Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally.

1 Introduction

In recent years, hardware systems employing GPUs and TPUs have enabled neural network training programs to process dramatically more data in parallel than ever before. The most popular way to exploit these systems is to increase the batch size in the optimization algorithm (i.e. the number of training examples processed per training step). On many workloads, modern systems can scale to larger batch sizes without significantly increasing the time per step [Jouppi et al., 2017, Wang et al., 2019], thus proportionally increasing the number of training examples processed per second. If researchers can use this increased throughput to reduce the time required to train each neural network, then they should achieve better results by training larger models, using larger datasets, and by exploring new ideas more rapidly.

As the capacity for data parallelism continues to increase, practitioners can take their existing, well-tuned training configurations and re-train with larger batch sizes, hoping to achieve the same performance in less training time [e.g. Ying et al., 2018]. On an idealized data-parallel system with negligible overhead from increasing the batch size, they might hope to achieve perfect scaling, a proportional reduction in training time as the batch size increases.

However, achieving perfect scaling is not always straightforward. Changing the batch size changes the training dynamics, requiring the training hyperparameters (e.g.
learning rate) to be carefully28 re-tuned in order to maintain the same level of validation performance.1 In addition, smaller batch29 sizes provide implicit regularization from gradient noise that may need to be replaced by other forms30 of regularization when the batch size is increased. Finally, even with perfect tuning, increasing31 1 Although there are heuristics for adjusting the learning rate as the batch size changes, these heuristics inevitably break down sufficiently far from the initial batch size and it is also not clear how to apply them to other training hyperparameters (e.g. momentum). Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. the batch size eventually produces diminishing returns. After a critical batch size, the number of32 training steps cannot be decreased in proportion to the batch size – the number of epochs must33 increase to match the validation performance of the smaller batch size. See Shallue et al. 2019 for a34 survey of the effects of data parallelism on neural network training. Once these effects are taken into35 account, there is no strong evidence that increasing the batch size degrades the maximum achievable36 performance on any workload. At the same time, the ever-increasing capacity for data parallelism37 presents opportunities for new regularization techniques that can replace the gradient noise of smaller38 batch sizes and new optimization algorithms that can extend perfect scaling to larger batch sizes by39 using more sophisticated gradient information [Zhang et al., 2019].40 You et al. [2017] proposed the LARS optimization algorithm in the hope of speeding up neural41 network training by exploiting larger batch sizes. LARS is a variant of stochastic gradient descent42 (SGD) with momentum [Polyak, 1964] that applies layer-wise normalization before applying each43 gradient update. Although it is difficult to draw strong conclusions from the results presented in the44 LARS paper, 2 the MLPerf3 Training benchmark4 adopted LARS as one of two allowed algorithms45 in the closed division for ResNet-50 on ImageNet and it became the de facto standard algorithm for46 that benchmark task. With MLPerf entrants competing to find the fastest-training hyperparameters47 for LARS, the first place submissions in the two most recent MLPerf Training competitions used48 LARS to achieve record training speeds with batch sizes of 32,678 and 65,536, respectively. No49 publications or competitive submissions to MLPerf have attempted to match these results with a50 standard optimizer (e.g. Momentum or Adam). However, MLPerf entrants do not have a strong51 incentive (nor are necessarily permitted by the rules) to explore other algorithms because MLPerf52 Training is a systems benchmark that requires algorithmic equivalence between submissions to make53 fair comparisons. Moreover, since the main justification for LARS is its excellent performance on54 ResNet-50 at large batch sizes, more work is needed to quantify any benefit of LARS over standard55 algorithms at any batch size.56 You et al. [2019] later proposed the LAMB optimizer to speed up pre-training for BERT [Devlin57 et al., 2018] using larger batch sizes after concluding that LARS was not effective across workloads.58 LAMB is a variant of Adam [Kingma and Ba, 2014] that adds a similar layer-wise normalization step59 to LARS. You et al. 
[2019] used LAMB for BERT pre-training with batch sizes up to 65,536 and60 claimed that Adam cannot match the performance of LAMB beyond batch size 16,384.61 In this paper, we demonstrate that standard optimizers, without any layer-wise normalization tech-62 niques, can match or improve upon the large batch size results used to justify LARS and LAMB. In63 Section 2, we show that Nesterov momentum [Nesterov, 1983] matches the performance of LARS on64 the ResNet-50 benchmark with batch size 32,768. We are the first to match this result with a standard65 optimizer. In Section 3, contradicting the claims in You et al. [2019], we show that Adam obtains66 better BERT pre-training results than LAMB at the largest batch sizes, resulting in better downstream67 performance metrics after fine-tuning.68 In addition, we establish a new state-of-the-art for BERT pretraining speed, reaching an F1 score of69 90.46 in 7,818 steps using Adam at batch size 65,536 (we report training speed in steps because our70 focus is algorithmic efficiency, but since we compare LARS and LAMB to simpler optimizers, fewer71 training steps corresponds to faster wall-time in an optimized implementation – our BERT result72 with Adam also improves upon the wall-time record of LAMB reported in You et al. 2019). Taken73 together, our results establish stronger training speed baselines for these tasks and batch sizes, which74 we hope will assist future work aiming to accelerate training using larger batch sizes.75 In addition to the contributions mentioned above, we demonstrate several key effects that are often76 overlooked by studies aiming to establish the superiority of new optimization algorithms. We show77 that future work must carefully disentangle regularization and optimization effects when comparing a78 new optimizer to baselines. We also report several under-documented details used to generate the79 best LARS and LAMB results, a reminder that future comparisons should document any novel tricks80 and include them in baselines. Finally, our results add to existing evidence in the literature on the81 difficulty of performing independently rigorous hyperparameter tuning for optimizers and baselines.82 2 The modified AlexNet on ImageNet benchmark did not have well-established accuracy targets from prior work and LARS used a more general learning rate schedule than the momentum baseline. For ResNet-50 on ImageNet, LARS achieved sub-par accuracy numbers and was not compared to any other optimizer at the same batch size, leaving open the possibility that a generic optimizer would scale just as well as LARS. 3 MLPerf is a trademark of MLCommons.org. 4 https://mlperf.org/training-overview In particular, we show that the optimal shape of the learning rate schedule is optimizer-dependent (in83 addition to the scale), and that differences in the schedule can dominate optimizer comparisons at84 smaller step budgets and become less important at larger step budgets.85 1.1 Related work86 Shallue et al. [2019] and Zhang et al. [2019] explored the effects of data parallelism on neural network87 training for different optimizers, finding no evidence that larger batch sizes degrade performance88 and demonstrating that different optimizers can achieve perfect scaling up to different critical batch89 sizes. You et al. [2017, 2019] developed the LARS and LAMB optimizers in the hope of speeding up90 training by achieving perfect scaling beyond standard optimizers. 
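Both algorithms revolve around rescaling each layer's update by a ratio of the parameter norm to the update norm. As a rough, illustrative sketch only (the published LARS and LAMB differ in their trust coefficients, weight-decay handling, and which parameters are exempted, so this is not a faithful reimplementation of either), the shared idea looks like:

```python
import numpy as np

def layerwise_normalized_update(param, update, base_lr, eps=1e-9):
    """Schematic layer-wise scaling shared, in spirit, by LARS and LAMB.

    The "raw" update (a momentum buffer for LARS, an Adam step for LAMB)
    is rescaled per layer so that its magnitude is proportional to the
    magnitude of that layer's parameters. This is an illustrative
    simplification, not the exact published update rule of either method.
    """
    param_norm = np.linalg.norm(param)
    update_norm = np.linalg.norm(update)
    trust_ratio = param_norm / (update_norm + eps) if param_norm > 0 else 1.0
    return param - base_lr * trust_ratio * update
```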
Many other recent papers have91 proposed new optimization algorithms for generic batch sizes or larger batch sizes [see Schmidt92 et al., 2020]. Choi et al. [2019] and Schmidt et al. [2020] demonstrated the difficulties with fairly93 comparing optimizers, showing that the hyperparameter tuning protocol is a key determinant of94 optimizer rankings. The MLPerf Training benchmark [Mattson et al., 2019] provides a competitive95 ranking of neural network training systems, but does not shed much light on the relative performance96 of optimizers because entrants are limited in the algorithms they can use and the hyperparameters97 they can tune.98 2 Matching LARS on ImageNet99 The MLPerf training benchmark for ResNet-50 v1.5 on ImageNet [Mattson et al., 2019] aims to100 reach 75.9% validation accuracy in the shortest possible wall-clock time. In the closed division of101 the competition, entrants must choose between two optimizers, SGD with momentum or LARS, and102 are only allowed to tune a specified subset of the optimization hyperparameters, with the remaining103 hyperparameter values set by the competition rules.5 The winning entries in the two most recent104 competitions used LARS with batch size 32,768 for 72 training epochs6 and LARS with batch size105 65,536 for 88 training epochs,7 respectively. Kumar et al. [2019] later improved the training time106 for batch size 32,768 by reaching the target accuracy in 64 epochs. These are currently the fastest107 published results on the ResNet-50 benchmark. However, it has been unclear whether LARS was108 necessary to achieve these training speeds since no recent published results or competitive MLPerf109 submissions have used another optimizer. In this section, we describe how we matched the 64 epoch,110 32,768 batch size result of LARS using standard Nesterov momentum.8111 A fair benchmark of training algorithms or hardware systems must account for stochasticity in112 individual training runs. In the MLPerf competition, the benchmark metric is the mean wall-clock113 time of 5 trials after the fastest and slowest trials are excluded. Only 4 out of the 5 trials need to reach114 the target accuracy and there is no explicit limit on the number of times an entrant can try a different115 set of 5 trials. Since our goal is to compare algorithms, rather than systems, we aim to match the116 LARS result in terms of training steps instead (but since Nesterov momentum is computationally117 simpler than LARS, this would also correspond to faster wall-clock time on an optimized system).118 Specifically, we measure the median validation accuracy over 50 training runs with a fixed budget of119 2,512 training steps9 at a batch size of 32,768. When we ran the published LARS training pipeline,10120 LARS achieved a median accuracy of 75.97% and reached the target in 35 out of 50 trials. We121 consider the LARS result to be matched by another optimizer if the median over 50 trials exceeds the122 target of 75.9%.123 2.1 Nesterov momentum at batch size 32k124 This section describes how we used the standard Nesterov momentum optimizer to train the ResNet-125 50 v1.5 on ImageNet to 75.9% validation accuracy in 2,512 update steps at a batch size of 32,768,126 matching the best published LARS result at this batch size. Although we implemented our own127 training program, the only logical changes we made to the published LARS pipeline were to the128 optimizer and the optimization hyperparameters. 
Our model implementation and data pre-processing pipeline were identical to those required under the MLPerf closed division rules (see Appendix B).

5 https://git.io/JtknD
6 https://mlperf.org/training-results-0-6
7 https://mlperf.org/training-results-0-7
8 The 88 epoch, 65,536 batch size result is faster in terms of wall-clock time but requires more training epochs, indicating that it is beyond LARS's perfect scaling regime. Although LARS obtains diminishing returns when increasing the batch size from 32,768 to 65,536, future work could investigate whether Nesterov momentum drops off more or less rapidly than LARS.
9 Corresponding to 64 training epochs in Kumar et al. [2019].
10 https://git.io/JtsLQ

We present two Nesterov momentum hyperparameter configurations that achieve comparable performance to LARS. Configuration A achieved a median accuracy of 75.97% (the same as LARS) and reached the target accuracy in 34 out of 50 trials. Configuration B is a modified version of Configuration A designed to make as few changes as possible to the LARS hyperparameters; it achieved a median accuracy of 75.92% and reached the target in 29 out of 50 trials. See Appendix D.1 for the complete hyperparameter configurations.

To achieve these results, we tuned the hyperparameters of the training pipeline from scratch using Nesterov momentum. We ran a series of experiments, each of which searched over a hand-designed hyperparameter search space using quasi-random search [Bousquet et al., 2017]. Between each experiment, we modified the previous search space and/or tweaked the training program to include optimization tricks and non-default hyperparameter values we discovered in the state-of-the-art LARS pipeline. The full sequence of experiments we ran, including the number of trials, hyperparameters tuned, and search space ranges, are provided in Appendix D.4. Once we had matched the LARS result with Configuration A, we tried setting each hyperparameter to its value in the LARS pipeline in order to find the minimal set of changes that still achieved the target result, producing Configuration B. The remainder of this section describes the hyperparameters we tuned and the techniques we applied on the journey to these results.

2.1.1 Nesterov Momentum Optimizer

Nesterov momentum is a variant of classical or "heavy-ball" momentum defined by the update rule

v_{t+1} = μ v_t + ∇ℓ(θ_t),    θ_{t+1} = θ_t − η_t (μ v_{t+1} + ∇ℓ(θ_t)),

where v_0 = 0, θ_t is the vector of model parameters after t steps, ∇ℓ(θ_t) is the gradient of the loss function ℓ(θ) averaged over a batch of training examples, μ is the momentum, and η_t is the learning rate for step t. We prefer Nesterov momentum over classical momentum because it tolerates larger values of its momentum parameter [Sutskever et al., 2013] and sometimes outperforms classical momentum, although the two algorithms perform similarly on many tasks [Shallue et al., 2019, Choi et al., 2019]. We tuned the Nesterov momentum µ in Configurations A and B.
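To make the update rule above concrete, the following is a minimal NumPy sketch of a single Nesterov momentum step as written in this section; the function name, interface, and the toy quadratic loss are illustrative assumptions rather than pieces of the MLPerf pipeline.

```python
import numpy as np

def nesterov_momentum_step(theta, v, grad, lr, mu=0.98):
    """One Nesterov momentum update as defined in Section 2.1.1.

    theta: current parameter vector
    v:     velocity buffer (v_0 = 0)
    grad:  gradient of the loss at theta, averaged over the batch
    lr:    learning rate eta_t for this step
    mu:    momentum coefficient (0.98 is the tuned value reported later)
    """
    v_next = mu * v + grad                           # v_{t+1} = mu * v_t + grad
    theta_next = theta - lr * (mu * v_next + grad)   # theta_{t+1}
    return theta_next, v_next

# Illustrative usage on a toy quadratic loss 0.5 * ||theta||^2.
theta = np.array([1.0, -2.0])
v = np.zeros_like(theta)
for t in range(100):
    grad = theta  # gradient of 0.5 * ||theta||^2 is theta itself
    theta, v = nesterov_momentum_step(theta, v, grad, lr=0.1)
```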
We discuss the learning rate schedule {η_t} separately in Section 2.1.4.

2.1.2 Batch normalization

The ResNet-50 v1.5 model uses batch normalization [Ioffe and Szegedy, 2015], defined as

BN(x^(l)) = ((x^(l) − mean(x^(l))) / sqrt(var(x^(l)) + ε)) × γ^(l) + β^(l),

where x^(l) is a vector of pre-normalization outputs from layer l, mean(·) and var(·) denote the element-wise sample mean and variance across the batch of training examples,11 and γ^(l) and β^(l) are trainable model parameters.

Batch normalization introduces the following tuneable hyperparameters: ε, the small constant added to the sample variance; the initial values of γ^(l) and β^(l); and ρ, which governs the exponential moving averages of the scaling factors used in evaluation. The LARS pipeline uses ε = 10−5 and ρ = 0.9. It sets the initial value of β^(l) to 0.0 everywhere, but the initial value of γ^(l) depends on the layer: it sets γ^(l) to 0.0 in the final batch normalization layer of each residual block, and to 1.0 everywhere else. In Configuration A, we tuned ε, ρ, and γ0, the initial value of γ^(l) in the final batch normalization layer of each residual block. In Configuration B, we used the same values as LARS for ε and ρ, but we found that choosing γ0 between 0.0 and 1.0 was important for matching the LARS result with Nesterov momentum.

2.1.3 Regularization

In Configuration A, we tuned both the L2 regularization coefficient λ and label smoothing coefficient τ [Szegedy et al., 2016]. The LARS pipeline uses λ = 10−4 and τ = 0.1. Crucially, the LARS pipeline does not apply L2 regularization to the bias variables of the ResNet model nor the batch normalization parameters γ^(l) and β^(l) (indeed, the published LARS pipeline does not even apply LARS to these parameters – it uses Heavy-ball momentum). This detail is extremely important for both LARS and Nesterov momentum to achieve the fastest training speed. Configuration B used the same λ and τ as Configuration A.

11 In a distributed training environment the mean and variance are commonly computed over a subset of the full batch. The LARS pipeline uses a "virtual batch size" of 64, which we also use to avoid changing the training objective [Hoffer et al., 2017].

2.1.4 Learning rate schedule

The LARS pipeline uses a piecewise polynomial schedule

η_t = η_init + (η_peak − η_init) (t / t_warmup)^p_warmup,                   for t ≤ t_warmup,
η_t = η_final + (η_peak − η_final) ((T − t) / (T − t_warmup))^p_decay,      for t > t_warmup,

with η_init = 0.0, η_peak = 29.0, η_final = 10−4, p_warmup = 1, p_decay = 2, and t_warmup = 706 steps. In Configuration A, we re-tuned all of these hyperparameters with Nesterov momentum. In Configuration B, we set η_init, p_decay, and t_warmup to the same values as LARS, changing only p_warmup from 1 to 2 and re-scaling η_peak and η_final.

2.1.5 Comparing Nesterov momentum and LARS

Table 1 shows the hyperparameter values for Configuration B that differ from the state-of-the-art LARS pipeline. Aside from re-tuning the momentum, learning rate scale, and regularization hyperparameters (whose optimal values are all expected to change with the optimizer), the only changes are setting p_warmup to 2 instead of 1 and re-tuning γ0.

Figure 1 shows the LARS learning rate schedule compared to the Nesterov momentum schedule. Even though these schedules are similar, we found that each optimizer had a different optimal value of the warmup polynomial power.
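Since Figure 1 is not reproduced in this text, the following minimal Python sketch of the piecewise polynomial schedule from Section 2.1.4 may help in reproducing the two curves; the default argument values are the LARS settings quoted above, and the function name is ours rather than taken from the MLPerf code.

```python
def piecewise_polynomial_lr(t, total_steps,
                            lr_init=0.0, lr_peak=29.0, lr_final=1e-4,
                            warmup_steps=706, p_warmup=1, p_decay=2):
    """Piecewise polynomial schedule from Section 2.1.4.

    Warms up from lr_init to lr_peak over warmup_steps with exponent
    p_warmup, then decays polynomially to lr_final with exponent p_decay
    by the end of training (total_steps).
    """
    if t <= warmup_steps:
        frac = t / warmup_steps
        return lr_init + (lr_peak - lr_init) * frac ** p_warmup
    frac = (total_steps - t) / (total_steps - warmup_steps)
    return lr_final + (lr_peak - lr_final) * frac ** p_decay

# Example: the Nesterov momentum configuration changes p_warmup from 1 to 2
# and rescales lr_peak and lr_final relative to the LARS values.
schedule = [piecewise_polynomial_lr(t, total_steps=2512) for t in range(2512)]
```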
As Table 2 shows, Nesterov momentum performs198 better with pwarmup = 2 instead of 1, while the opposite199 is true with LARS. As discussed in Agarwal et al. [2020],200 optimizers can induce implicit step size schedules that201 strongly influence their training dynamics and solution202 quality, and it appears from Table 2 that the implicit step203 sizes of Nesterov momentum and LARS may evolve dif-204 ferently, causing the shapes of their optimal learning rate205 schedules to differ.206 Although the main concern of a practitioner is validation performance, the primary task of an207 optimization algorithm is to minimize training loss. Table 2 shows that Nesterov momentum achieves208 higher training accuracy than LARS, despite similar validation performance. Thus, it may be more209 appropriate to consider the layerwise normalization of LARS to be a regularization technique, rather210 than an optimization technique.211 Spending even more effort tuning LARS or Nesterov momentum would likely further improve the212 current state-of-the-art for that optimizer. Meaningful optimizer comparisons are only possible213 with independent and equally intensive tuning efforts, and we do not claim that either optimizer214 outperforms the other on this benchmark. That said, if the main evidence for LARS’s utility as a215 “large-batch optimizer” is its performance on this particular benchmark, then more evidence is needed216 to quantify any benefit it has over traditional, generic optimizers like Nesterov momentum.217 2.2 Lessons learned218 In hindsight, it was only necessary to make a few changes to the LARS pipeline to match its219 performance at batch size 32,768 with Nesterov momentum. However, Table 1 does not accurately220 represent the effort required when attempting to match a highly tuned training-speed benchmark.221 Firstly, as described in Sections 2.1.2 and 2.1.3, the strong results of LARS depend partly on a few222 subtle optimization tricks and non-default values of uncommonly-tuned hyperparameters. Fortunately,223 in this case we could discover these tricks by examining the open-source code required for MLPerf224 submissions, but machine learning research papers do not always report these important details.225 Researchers can easily waste a lot of experiments and produce misleading results before getting all of226 these details right. We demonstrate the importance of adding these tricks to our Nesterov momentum227 pipeline in Appendix C; without these tricks (or some new tricks), we likely would not have been228 able to match the LARS performance.229 Secondly, the learning rate schedule really matters when trying to maximize performance with a230 relatively small step budget. Both LARS and Nesterov momentum are sensitive to small deviations231 from the optimized learning rate schedules in Figure 1, and neither schedule works as well for the232 other optimizer. Although relatively minor changes were sufficient to match LARS with Nesterov233 momentum, there is no way to know a priori how the optimal schedule will look for a new optimizer234 Wu et al. [2018]. Even in toy settings where the optimal learning rate schedule can be derived, it235 does not fit into commonly used schedule families and depends strongly on the optimizer Zhang236 et al. [2019]. Indeed, this problem applies to the other optimization hyperparameters as well: it237 is extremely difficult to know which are worth considering ahead of time. 
Finally, even when we narrowed down our hyperparameter search spaces around the optimal point, the volume of our search spaces corresponding to near-peak performance was small, likely due to the small step budget [Shallue et al., 2019]. We investigate how these effects change with a less stringent step budget in Section 4.

3 Stronger BERT pretraining speed baselines

You et al. [2019] developed the LAMB optimizer in the hope of speeding up training for BERT-Large [Bidirectional Encoder Representations from Transformers, Devlin et al., 2018]. BERT training consists of two phases. The "pretraining" phase has two objectives: (1) predicting masked tokens based on the rest of the sequence (a masked language model), and (2) predicting whether two given sentences follow one from another. Finally, the "fine-tuning" phase refines the model for a downstream task of interest. BERT pretraining takes a considerable amount of time (up to 3 days on 16 Cloud TPU-v3 chips [Jouppi et al., 2017]), whereas the fine-tuning phase is typically much faster. Model quality is typically assessed on the downstream metrics, not on pretraining loss, making BERT training a somewhat awkward benchmark for optimization research.

You et al. [2019] used LAMB for BERT pretraining with batch sizes up to 65,536 and claimed that LAMB outperforms Adam at batch size 16,384 and beyond. The LAMB optimizer has since appeared in several NLP toolkits, including Microsoft DeepSpeed and NVIDIA multi-node BERT training, and as a benchmark task in MLPerf v0.7.12

As shown in Table 3, we trained Adam (with decoupled weight decay) baselines that achieve better results than both the LAMB and Adam results reported in You et al. [2019]. Our new Adam baselines obtain better F1 scores on the development set of the SQuAD v1.1 task in the same number of training steps as LAMB for both batch size 32,768 and the hybrid 65,536-then-32,768 batch size training regime in You et al. [2019]. We also ran Adam at batch size 65,536 to reach nearly the same F1 score as the hybrid batch size LAMB result, but in far fewer training steps. We believe 7,818 steps is a new state-of-the-art for BERT pretraining speed [in our experiments, it also improves upon the 76-minute record claimed in You et al., 2019]. Additionally, at batch size 32,768 our Adam baseline got a better pretraining loss of 1.277 compared to LAMB's 1.342.

12 We do not consider the MLPerf task in this paper since it is a warm-start, partial training task.

We used the same experimental setup as You et al. [2019], including two pretraining phases with max sequence lengths of 128 and then 512. In order to match You et al. [2019], we reported the F1 score on the downstream SQuAD v1.1 task as the target metric, although this metric introduces potential confounds: optimization efficiency should be measured on the training task using training and held-out data sets. Fortunately, in this case better pretraining performance correlated with a higher F1 score after fine-tuning. See Appendix B.2 for additional experiment details. We tuned Adam hyperparameters independently for each pretraining phase, specifically learning rate η, β1, β2, the polynomial power for the learning rate warmup p_warmup, and weight decay λ, using quasi-random search [Bousquet et al., 2017].
See Appendix D.2 for the search spaces.

In addition to hyperparameter tuning, our improved Adam results at these batch sizes are also likely due to two implementation differences. First, the Adam implementation in You et al. [2019] comes from the BERT open source code base, in which Adam is missing the standard bias correction.13 The Adam bias correction acts as an additional step size warm-up, thereby potentially improving the stability in the initial steps of training. Second, the BERT learning rate schedule had a discontinuity at the start of the decay phase due to the learning rate decay being incorrectly applied during warm-up14 (see Figure 2 in Appendix B). This peculiarity is part of the official BERT release and is present in 3000+ copies of the BERT Training code on GitHub.

4 Investigating a less stringent step budget

Part of what makes comparing optimizers so difficult is that the hyperparameter tuning tends to dominate the comparisons [Choi et al., 2019]. Moreover, tuning becomes especially difficult when we demand a fixed epoch budget even when dramatically increasing the batch size [Shallue et al., 2019]. Fixing the epoch budget as the batch size increases is equivalent to demanding perfect scaling (i.e. that the number of training steps decreases by the same factor that the batch size is increased). We can view the role of hyperparameter tuning for large batch training as resisting the inevitable end of perfect scaling. For example, it might be possible to extend perfect scaling using delicately tuned learning rate schedules, but comparing optimizers under these conditions can make the learning rate schedule dominate the comparison by favoring some algorithms over others. Therefore, in order to better understand the behavior of LARS and LAMB compared to Nesterov Momentum and Adam, we ran additional ResNet-50 experiments with a more generous 6,000 step budget (vs 2,512 in Section 2) and a more simplistic cosine learning rate schedule. At batch size 32,768, this budget should let us reach better validation accuracy than the MLPerf target of 75.9%.

Although not mentioned in You et al. [2017], the state-of-the-art MLPerf pipeline for "LARS" actually uses both LARS and Heavy-ball Momentum, with Momentum applied to the batch normalization and ResNet bias parameters and LARS applied to the other parameters. You et al. [2019] does not mention whether LAMB was only applied to some parameters and not others. If layerwise normalization can be harmful for some model parameters, this is critical information for practitioners using LARS or LAMB, since it might not be obvious which optimizer to apply to which parameters. To investigate this, we trained both pure LARS and LAMB configurations, as well as configurations that did not apply layerwise normalization to the batch normalization and ResNet bias parameters. Moreover, LAMB's underlying Adam implementation defaults to ε = 10−6, rather than the typical 10−7 or 10−8. In some cases, ε can be a critical hyperparameter for Adam [Choi et al., 2019], so we included Adam configurations with both ε = 10−6 and ε = 10−8.

Table 4 shows the validation accuracy of these different configurations after training for 6,000 steps with batch size 32,768. In every case, we used a simple cosine decay learning rate schedule and tuned the initial learning rate and weight decay using quasi-random search.
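Two implementation details referenced above, the standard Adam bias correction from Section 3 and the simple cosine decay schedule used in this section, can be sketched as follows; the function names and defaults are illustrative assumptions, not the exact code used in our experiments.

```python
import numpy as np

def adam_step(theta, m, v, grad, t, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update with bias correction [Kingma and Ba, 2014].

    t is the 1-indexed step count. Dropping the two m_hat / v_hat lines
    (using m and v directly) recovers the uncorrected variant in the BERT
    code base discussed in Section 3, which loses Adam's implicit
    step-size warm-up.
    """
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    m_hat = m / (1.0 - beta1 ** t)  # bias correction, first moment
    v_hat = v / (1.0 - beta2 ** t)  # bias correction, second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def cosine_decay_lr(t, total_steps, base_lr):
    """Simple cosine decay from base_lr to 0 over total_steps."""
    return 0.5 * base_lr * (1.0 + np.cos(np.pi * t / total_steps))
```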
We used314 momentum parameters of 0.98 for Nesterov momentum and 0.929 for LARS, respectively, based315 on the tuned values from Section 2. We used default hyperparameters for Adam and LAMB316 except where specified. We set all other hyperparameters to the same values as the state-of-the-317 art LARS pipeline, except we set γ0 = 1.0. See Appendix D.3 for more details. As expected,318 13 https://git.io/JtY8d 14 See https://git.io/JtnQW and https://git.io/JtnQ8. highly tuned learning rate schedules and optimizer hyperparameters are no longer necessary with319 a less stringent step budget. Multiple optimizer configurations in Table 4 exceed the MLPerf320 target accuracy of 75.9% at batch size 32,768 with minimal tuning. Training with larger batch321 sizes is not fundamentally unstable: stringent step budgets make hyperparameter tuning trickier.322 LAMB, are introduced alongside claims that337 the new optimizer does not require any—or at338 least minimal—tuning. Unfortunately, these339 claims require a lot of work to support, since340 they require trying the optimizer on new prob-341 lems without using those problems during the342 development of the algorithm. Although our ex-343 periments here are not sufficient to determine344 which optimizers are easiest to tune, experiments like these that operate outside the regime of highly345 tuned learning rate schedules can serve as a starting point. In this experiment, LARS and LAMB do346 not appear to have an advantage in how easy they are to tune even on a dataset and model that were347 used in the development of both of those algorithms. LAMB is a variant of Adam and performs about348 the same as Adam with the same value of ; LARS is more analogous to Momentum and indeed349 Nesterov momentum and LARS have similar performance.350 5 Discussion351 Our results show that standard, generic optimizers suffice for achieving strong results across batch352 sizes. Therefore, any research program to create new optimizers for training at larger batch sizes353 must start from the fact that Momentum, Adam, and likely other standard methods work fine at batch354 sizes as large as those considered in this paper. The LARS and LAMB update rules have no more355 to do with the batch size (or “large” batches) than the Momentum or Adam update rules. Although356 You et al. [2019] presented convergence rate bounds for LARS and LAMB to support their claims357 of superior performance, we show in Appendix A that Adam satisfies a similar bound to LAMB.358 These bounds all rely on very unrealistic assumptions.15 Most of all, they are loose upper bounds359 on the worst case behavior of the algorithms, not accurate reflections of optimizer performance in360 reality. Whether layer-wise normalization can be useful for optimization or regularization remains an361 open question. However, if LARS and LAMB have any advantage over standard techniques, it is not362 that they work dramatically better on the tasks and batch sizes in You et al. [2017, 2019]. This is363 not to suggest that there is nothing interesting about studying neural network optimization at larger364 batch sizes. For example, as gradient noise decreases, there may be opportunities to harness curvature365 information and extend the region of perfect scaling [Zhang et al., 2019]. 
However, there is currently366 no evidence that LARS and LAMB scale better than Momentum and Adam.367 Our primary concern in this paper has been matching the state of the art—and establishing new368 baselines—for training speed measurements of the sort used to justify new techniques and algorithms369 for training with larger batch sizes. In contrast, many practitioners are more concerned with obtaining370 the best possible validation error with a somewhat flexible training time budget. Part of the reason371 why matching LARS at batch size 32,768 was non-trivial is because getting state of the art training372 15 All convergence bounds assume no momentum is used, and the Lavg bound for LAMB also assumes β2 = 0, when it is typically 0.999. Additionally, Lavg could still be large if L∞ is large, but we leave an empirical analysis of this to future work. speed requires several tricks and implementation details that are not often discussed. It was not373 obvious to us a priori which ones would prove crucial. These details do not involve changes to the374 optimizer, but they interact with the optimizer in a regime where all hyperparameters need to be well375 tuned to stay competitive, making it necessary to re-tune everything for a new optimizer.376 In neural network optimization research, training loss is rarely discussed in detail and evaluation377 centers on validation/test performance since that is what practitioners care most about. However,378 although we shouldn’t only consider training loss, it is counter-intuitive and counter-productive to379 elide a careful investigation of the actual objective of the optimizer. If a new optimizer achieves better380 test performance, but shows no speedup on training loss, then perhaps it is not a better optimizer so381 much as an indirect regularizer. 16 Indeed, in our experiments we found that Nesterov momentum382 achieves noticeably better training accuracy on ResNet-50 than the LARS configuration we used,383 despite reaching roughly the same validation accuracy. Properly disentangling possible regularization384 benefits from optimization speed-ups is crucial if we are to understand neural network training,385 especially at larger batch sizes where we lose some of the regularization effect of gradient noise.386 Hypothetically, if the primary benefit of a training procedure is regularization, then it would be better387 to compare the method with other regularization baselines than other optimizers.388 Ultimately, we only care about batch size to the extent that higher degrees of data parallelism lead389 to faster training. Training with a larger batch size is a means, not the end goal. New optimizers—390 whether designed for generic batch sizes or larger batch sizes—have the potential to dramatically391 improve algorithmic efficiency across multiple workloads, but our results show that standard opti-392 mizers can match the performance of newer alternatives on the workloads we considered. Indeed,393 despite the legion of new update rule variants being proposed in the literature, standard Adam and394 Momentum remain the workhorses of practitioners and researchers alike, while independent empirical395 comparisons consistently find no clear winner when optimizers are compared across a variety of396 workloads [Schmidt et al., 2020]. Meanwhile, as Choi et al. [2019] and our results underscore,397 comparisons between optimizers crucially depend on the effort spent tuning hyperparameters for each398 optimizer. 
Given these facts, we should regard with extreme caution studies claiming to show the399 superiority of one particular optimizer over others. Part of the issue stems from current incentives in400 the research community; we overvalue the novelty of new methods and undervalue establishing strong401 baselines to measure progress against. This is particularly problematic in the study of optimizers,402 where the learning rate schedule is arguably more important than the choice of the optimizer update403 rule itself! As our results show, the best learning rate schedule is tightly coupled with the optimizer,404 meaning that tuning the learning rate schedule for a new optimizer will generally favor the new405 optimizer over a baseline unless the schedule of the baseline is afforded the same tuning effort.406 6 Conclusion407 In this work, we demonstrated that standard optimizers, without any layer-wise normalization408 techniques, can match or exceed the large batch size results used to justify LARS and LAMB. Future409 work attempting to argue that a new algorithm is useful by comparing to baseline methods or results,410 including those established in this paper, faces a key challenge in showing that the gains are due to the411 new method and not merely due to better tuning or changes to the training pipeline (e.g. regularization412 tricks). Although gains from tuning will eventually saturate, we can, in principle, always invest more413 effort in tuning and potentially get better results for any optimizer. However, our goal should be414 developing optimizers that work better across many different workloads when taking into account the415 amount of additional tuning they require.416 Moving forward, if we are to reliably make progress we need to rethink how we compare and evaluate417 new optimizers for neural network training. Given how sensitive optimizer performance is to the418 hyperparameter tuning protocol and how difficult it is to quantify hyperparameter tuning effort, we419 can’t expect experiments with self-reported baselines to always lead to fair comparisons. Ideally, new420 training methods would be evaluated in a standardized competitive benchmark, where submitters of421 new optimizers do not have full knowledge of the evaluation workloads. Some efforts in this direction422 have started, for instance the MLCommons Algorithmic Efficiency Working Group17, but more work423 needs to be done to produce incentives for the community to publish well-tuned baselines and to424 reward researchers that conduct the most rigorous empirical comparisons.425 16 Deep learning folk wisdom is that “any method to make training less effective can serve as a regularizer,” whether it is a bug in gradients or a clever algorithm. 17 https://mlcommons.org/en/groups/research-algorithms/ Checklist426 1. For all authors...427 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s428 contributions and scope? [Yes] See Sections 2, 3, 4429 (b) Did you describe the limitations of your work? [Yes] We had a lengthy discussion of430 the limitations and scope of the work in Section 5431 (c) Did you discuss any potential negative societal impacts of your work? [No] We did432 not discuss this in the main text. Our primary contribution is to improve experimental433 protocols for other methodological work, which is so removed from specific machine434 learning applications that it is hard to determine the net impact. 
That said, more435 effective experimental protocols should lead to more effective science which in turn436 should lead to more effective machine learning applications. Whether this development437 is positive or negative for society will depend on who stands to gain from the use of438 machine learning in future applied contexts. Additionally, although our work should, in439 the long run, save computational resources for individual researchers, in net across the440 community this may or may not produce an aggregate savings because more efficient441 machine learning training, by making larger scale projects more accessible, can lead442 to an increased demand for compute resources [York, 2006], which can have varying443 degrees of negative environmental impacts [Patterson et al., 2021].444 (d) Have you read the ethics review guidelines and ensured that your paper conforms to445 them? [Yes]446 2. If you are including theoretical results...447 (a) Did you state the full set of assumptions of all theoretical results? [Yes] See Appendix A448 for a comprehensive description of the problem setting.449 (b) Did you include complete proofs of all theoretical results? [Yes] See Appendix A.450 3. If you ran experiments...451 (a) Did you include the code, data, and instructions needed to reproduce the main experi-452 mental results (either in the supplemental material or as a URL)? [No] We will include453 a link to all code and all possible reproducibility instructions after the anonymized454 reviewing period is over.455 (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they456 were chosen)? [Yes] We are extremely detailed about our tuning procedures and dataset457 details, see Appendices B, D.458 (c) Did you report error bars (e.g., with respect to the random seed after running experi-459 ments multiple times)? [Yes] While we do not report error bars in the tables in the main460 text, Appendices B.2, C contains box plots showing the quartiles of the distribution461 over random seeds.462 (d) Did you include the total amount of compute and the type of resources used (e.g., type463 of GPUs, internal cluster, or cloud provider)? [No] In Appendix B we state that we464 run on Google TPUs, however we do not tally up the total number of experiments run465 (although an interested reader could compute it from the information we provided in466 our detailed appendices given that we list all intermediate experiments, no matter how467 silly in hindsight).468 4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...469 (a) If your work uses existing assets, did you cite the creators? [Yes] We reference the470 relevant citations for all models, datasets, and techniques.471 (b) Did you mention the license of the assets? [No]472 (c) Did you include any new assets either in the supplemental material or as a URL? [No]473 (d) Did you discuss whether and how consent was obtained from people whose data you’re474 using/curating? [N/A]475 (e) Did you discuss whether the data you are using/curating contains personally identifiable476 information or offensive content? [N/A]477 5. If you used crowdsourcing or conducted research with human subjects...478 (a) Did you include the full text of instructions given to participants and screenshots, if479 applicable? [N/A]480 (b) Did you describe any potential participant risks, with links to Institutional Review481 Board (IRB) approvals, if applicable? 
[N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

References

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.

Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, and Cyril Zhang. Disentangling adaptive gradient methods from learning rates. arXiv preprint arXiv:2002.11803, 2020.

Olivier Bousquet, Sylvain Gelly, Karol Kurach, Olivier Teytaud, and Damien Vincent. Critical hyperparameters: No random, no cry. arXiv, 2017. URL https://arxiv.org/abs/1706.03200.

James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.

Dami Choi, Christopher J Shallue, Zachary Nado, Jaehoon Lee, Chris J Maddison, and George E Dahl. On empirical comparisons of optimizers for deep learning. arXiv preprint arXiv:1910.05446, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, pages 1–12, 2017.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Sameer Kumar, Victor Bitorff, Dehao Chen, Chiachen Chou, Blake Hechtman, HyoukJoong Lee, Naveen Kumar, Peter Mattson, Shibo Wang, Tao Wang, et al. Scale MLPerf-0.6 models on Google TPU-v3 pods. arXiv preprint arXiv:1909.09756, 2019.

Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojyoti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Atsushi Ike, Bill Jia, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Guokai Ma, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St. John, Tsuguchika Tabaru, Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, Cliff Young, and Matei Zaharia. MLPerf training benchmark. arXiv preprint arXiv:1910.01500, 2019. URL https://arxiv.org/abs/1910.01500.

Yurii E Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2). In Dokl. Akad. Nauk SSSR, volume 269, pages 543–547, 1983.

David Patterson, Joseph Gonzalez, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David So, Maud Texier, and Jeff Dean. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350, 2021.

Boris T Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.

Robin M Schmidt, Frank Schneider, and Philipp Hennig. Descending through a crowded valley – benchmarking deep learning optimizers. arXiv preprint arXiv:2007.01547, 2020.

Christopher J Shallue, Jaehoon Lee, Joseph Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E Dahl. Measuring the effects of data parallelism on neural network training. Journal of Machine Learning Research, 20(112):1–49, 2019.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.

Yu Emma Wang, Gu-Yeon Wei, and David Brooks. Benchmarking TPU, GPU, and CPU platforms for deep learning. arXiv preprint arXiv:1907.10701, 2019.

Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in stochastic meta-optimization. arXiv preprint arXiv:1803.02021, 2018.

Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng. Image classification at supercomputer scale. arXiv preprint arXiv:1811.06992, 2018.

Richard York. Ecological paradoxes: William Stanley Jevons and the paperless office. Human Ecology Review, pages 143–147, 2006.

Yang You, Igor Gitman, and Boris Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.

Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. In International Conference on Learning Representations, 2019.

Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George Dahl, Chris Shallue, and Roger B Grosse. Which algorithmic choices matter at which batch sizes? Insights from a noisy quadratic model. In Advances in Neural Information Processing Systems, pages 8196–8207, 2019.

Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, pages 19–27, USA, 2015. IEEE Computer Society. ISBN 9781467383912. doi: 10.1109/ICCV.2015.11. URL https://doi.org/10.1109/ICCV.2015.11.
1. What is the main contribution of the paper regarding stochastic optimizers for large-batch training? 2. What are the strengths and weaknesses of the proposed approach compared to prior works like Adam and LAMB/LARS? 3. How does the reviewer assess the novelty and significance of the paper's content? 4. Are there any concerns or questions regarding the paper's methodology, experimental design, or conclusions? 5. How does the reviewer evaluate the clarity, quality, and reproducibility of the paper's writing and results?
Summary Of The Paper Review
Summary Of The Paper The paper performs a detailed hyperparameter study of stochastic optimizers specifically designed for large-batch training (LARS and LAMB). With appropriate hyperparameter tuning, simpler optimizers such as Nesterov SGD and Adam match and sometimes outperform those specialized optimizers on ResNet-50 and BERT. The authors attribute the relative success of large-batch methods to minute undocumented aspects in their implementations, such as regularization and scaling on certain operators, which can be back-ported to the other optimizers, and advocate for more relaxed definitions (e.g. number of steps) for benchmarks such as MLPerf Training. Review The paper neatly showcases the feebleness of highly-tuned stochastic optimization schedules on two standard deep learning benchmarks. While the points of the paper are mention-worthy, there is nearly no scientific contribution in the paper. Even as an empirical study, it hard to draw tangible conclusions from the paper. The only takeaway from the paper is that LAMB/LARS are very well-tuned, and simpler optimizers' hyperparameter schedules could also be aggressively tuned to achieve the same result. The paper reads like a direct attack on LAMB and LARS, though the same claims could be made on other optimizers, including Adam and its hyperparameters (with enough tuning, one could probably attain similar convergence properties to Adam with momentum SGD). Similar claims about benchmarking stochastic optimizers w.r.t. tunability have already been made, e.g. by Sivaprasad et al. in "Optimizer Benchmarking Needs to Account for Hyperparameter Tuning", which is not cited here. In the paper, there is very limited (sometimes absent) theoretical/empirical reasoning for the importance of certain parameter values: for example, in line 169: "we found that choosing γ 0 between 0.0 and 1.0 was important" --- why is that? What happens to the training process otherwise? Additionally, would the tuned Nesterov parameters for ResNet-50 transfer to other neural architectures (such as ViT), or other datasets? As a reader, I would be interested in a sensitivity analysis of the hyperparameters (or of the same algorithm to varying neural architectures/datasets), theoretical justification, or some sort of guidelines on how to tune those parameters in reasonable time (i.e. without exploring the entire space). In-depth comments: Since most of the paper is spent discussing LAMB and LARS, I find it strange that their update rule definitions cannot be found in the manuscript. Line 322: "Training with larger batch sizes is not fundamentally unstable: stringent step budgets make hyperparameter tuning trickier." --- Even with an unlimited step budget, it was shown on multiple occasions (with theoretical justification) that larger batch sizes tend to overfit. What about non-Nesterov momentum SGD? is there a configuration that works similarly well? If not, why not? Line 164: moving averages not explained well in the context of batch normalization. Throughout: the paper uses too many footnotes to my taste, which detracts from the flow of the text.
NIPS
Title A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes

Abstract Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally.

1 Introduction

In recent years, hardware systems employing GPUs and TPUs have enabled neural network training programs to process dramatically more data in parallel than ever before. The most popular way to exploit these systems is to increase the batch size in the optimization algorithm (i.e. the number of training examples processed per training step). On many workloads, modern systems can scale to larger batch sizes without significantly increasing the time per step [Jouppi et al., 2017, Wang et al., 2019], thus proportionally increasing the number of training examples processed per second. If researchers can use this increased throughput to reduce the time required to train each neural network, then they should achieve better results by training larger models, using larger datasets, and by exploring new ideas more rapidly.

As the capacity for data parallelism continues to increase, practitioners can take their existing, well-tuned training configurations and re-train with larger batch sizes, hoping to achieve the same performance in less training time [e.g. Ying et al., 2018]. On an idealized data-parallel system with negligible overhead from increasing the batch size, they might hope to achieve perfect scaling, a proportional reduction in training time as the batch size increases.

However, achieving perfect scaling is not always straightforward. Changing the batch size changes the training dynamics, requiring the training hyperparameters (e.g.
learning rate) to be carefully28 re-tuned in order to maintain the same level of validation performance.1 In addition, smaller batch29 sizes provide implicit regularization from gradient noise that may need to be replaced by other forms30 of regularization when the batch size is increased. Finally, even with perfect tuning, increasing31 1 Although there are heuristics for adjusting the learning rate as the batch size changes, these heuristics inevitably break down sufficiently far from the initial batch size and it is also not clear how to apply them to other training hyperparameters (e.g. momentum). Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute. the batch size eventually produces diminishing returns. After a critical batch size, the number of32 training steps cannot be decreased in proportion to the batch size – the number of epochs must33 increase to match the validation performance of the smaller batch size. See Shallue et al. 2019 for a34 survey of the effects of data parallelism on neural network training. Once these effects are taken into35 account, there is no strong evidence that increasing the batch size degrades the maximum achievable36 performance on any workload. At the same time, the ever-increasing capacity for data parallelism37 presents opportunities for new regularization techniques that can replace the gradient noise of smaller38 batch sizes and new optimization algorithms that can extend perfect scaling to larger batch sizes by39 using more sophisticated gradient information [Zhang et al., 2019].40 You et al. [2017] proposed the LARS optimization algorithm in the hope of speeding up neural41 network training by exploiting larger batch sizes. LARS is a variant of stochastic gradient descent42 (SGD) with momentum [Polyak, 1964] that applies layer-wise normalization before applying each43 gradient update. Although it is difficult to draw strong conclusions from the results presented in the44 LARS paper, 2 the MLPerf3 Training benchmark4 adopted LARS as one of two allowed algorithms45 in the closed division for ResNet-50 on ImageNet and it became the de facto standard algorithm for46 that benchmark task. With MLPerf entrants competing to find the fastest-training hyperparameters47 for LARS, the first place submissions in the two most recent MLPerf Training competitions used48 LARS to achieve record training speeds with batch sizes of 32,678 and 65,536, respectively. No49 publications or competitive submissions to MLPerf have attempted to match these results with a50 standard optimizer (e.g. Momentum or Adam). However, MLPerf entrants do not have a strong51 incentive (nor are necessarily permitted by the rules) to explore other algorithms because MLPerf52 Training is a systems benchmark that requires algorithmic equivalence between submissions to make53 fair comparisons. Moreover, since the main justification for LARS is its excellent performance on54 ResNet-50 at large batch sizes, more work is needed to quantify any benefit of LARS over standard55 algorithms at any batch size.56 You et al. [2019] later proposed the LAMB optimizer to speed up pre-training for BERT [Devlin57 et al., 2018] using larger batch sizes after concluding that LARS was not effective across workloads.58 LAMB is a variant of Adam [Kingma and Ba, 2014] that adds a similar layer-wise normalization step59 to LARS. You et al. 
[2019] used LAMB for BERT pre-training with batch sizes up to 65,536 and60 claimed that Adam cannot match the performance of LAMB beyond batch size 16,384.61 In this paper, we demonstrate that standard optimizers, without any layer-wise normalization tech-62 niques, can match or improve upon the large batch size results used to justify LARS and LAMB. In63 Section 2, we show that Nesterov momentum [Nesterov, 1983] matches the performance of LARS on64 the ResNet-50 benchmark with batch size 32,768. We are the first to match this result with a standard65 optimizer. In Section 3, contradicting the claims in You et al. [2019], we show that Adam obtains66 better BERT pre-training results than LAMB at the largest batch sizes, resulting in better downstream67 performance metrics after fine-tuning.68 In addition, we establish a new state-of-the-art for BERT pretraining speed, reaching an F1 score of69 90.46 in 7,818 steps using Adam at batch size 65,536 (we report training speed in steps because our70 focus is algorithmic efficiency, but since we compare LARS and LAMB to simpler optimizers, fewer71 training steps corresponds to faster wall-time in an optimized implementation – our BERT result72 with Adam also improves upon the wall-time record of LAMB reported in You et al. 2019). Taken73 together, our results establish stronger training speed baselines for these tasks and batch sizes, which74 we hope will assist future work aiming to accelerate training using larger batch sizes.75 In addition to the contributions mentioned above, we demonstrate several key effects that are often76 overlooked by studies aiming to establish the superiority of new optimization algorithms. We show77 that future work must carefully disentangle regularization and optimization effects when comparing a78 new optimizer to baselines. We also report several under-documented details used to generate the79 best LARS and LAMB results, a reminder that future comparisons should document any novel tricks80 and include them in baselines. Finally, our results add to existing evidence in the literature on the81 difficulty of performing independently rigorous hyperparameter tuning for optimizers and baselines.82 2 The modified AlexNet on ImageNet benchmark did not have well-established accuracy targets from prior work and LARS used a more general learning rate schedule than the momentum baseline. For ResNet-50 on ImageNet, LARS achieved sub-par accuracy numbers and was not compared to any other optimizer at the same batch size, leaving open the possibility that a generic optimizer would scale just as well as LARS. 3 MLPerf is a trademark of MLCommons.org. 4 https://mlperf.org/training-overview In particular, we show that the optimal shape of the learning rate schedule is optimizer-dependent (in83 addition to the scale), and that differences in the schedule can dominate optimizer comparisons at84 smaller step budgets and become less important at larger step budgets.85 1.1 Related work86 Shallue et al. [2019] and Zhang et al. [2019] explored the effects of data parallelism on neural network87 training for different optimizers, finding no evidence that larger batch sizes degrade performance88 and demonstrating that different optimizers can achieve perfect scaling up to different critical batch89 sizes. You et al. [2017, 2019] developed the LARS and LAMB optimizers in the hope of speeding up90 training by achieving perfect scaling beyond standard optimizers. 
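Since the LARS and LAMB update rules themselves are only described in prose here, the following is a minimal NumPy sketch of one commonly cited formulation of a LARS-style step for a single layer. The trust coefficient, the handling of zero norms, and the exact placement of the momentum buffer vary between implementations; they are assumptions for illustration, not a transcription of the code of You et al.

```python
import numpy as np

def lars_layer_update(w, grad, v, lr, momentum=0.9, weight_decay=1e-4,
                      trust_coef=0.001, eps=1e-9):
    """One LARS-style step for a single layer's weights w.

    The global learning rate is rescaled per layer by ||w|| / ||g + wd*w||,
    so that every layer takes a step of roughly the same relative size.
    """
    g = grad + weight_decay * w                       # fold the L2 term into the gradient
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(g)
    if w_norm > 0 and g_norm > 0:
        local_lr = trust_coef * w_norm / (g_norm + eps)
    else:
        local_lr = 1.0                                # fall back to the unscaled step
    v = momentum * v + lr * local_lr * g              # heavy-ball momentum on the rescaled gradient
    w = w - v
    return w, v
```

LAMB applies the analogous per-layer norm ratio to the Adam update direction rather than to the raw gradient, which is the sense in which the text above describes it as adding "a similar layer-wise normalization step" to Adam.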
Many other recent papers have91 proposed new optimization algorithms for generic batch sizes or larger batch sizes [see Schmidt92 et al., 2020]. Choi et al. [2019] and Schmidt et al. [2020] demonstrated the difficulties with fairly93 comparing optimizers, showing that the hyperparameter tuning protocol is a key determinant of94 optimizer rankings. The MLPerf Training benchmark [Mattson et al., 2019] provides a competitive95 ranking of neural network training systems, but does not shed much light on the relative performance96 of optimizers because entrants are limited in the algorithms they can use and the hyperparameters97 they can tune.98 2 Matching LARS on ImageNet99 The MLPerf training benchmark for ResNet-50 v1.5 on ImageNet [Mattson et al., 2019] aims to100 reach 75.9% validation accuracy in the shortest possible wall-clock time. In the closed division of101 the competition, entrants must choose between two optimizers, SGD with momentum or LARS, and102 are only allowed to tune a specified subset of the optimization hyperparameters, with the remaining103 hyperparameter values set by the competition rules.5 The winning entries in the two most recent104 competitions used LARS with batch size 32,768 for 72 training epochs6 and LARS with batch size105 65,536 for 88 training epochs,7 respectively. Kumar et al. [2019] later improved the training time106 for batch size 32,768 by reaching the target accuracy in 64 epochs. These are currently the fastest107 published results on the ResNet-50 benchmark. However, it has been unclear whether LARS was108 necessary to achieve these training speeds since no recent published results or competitive MLPerf109 submissions have used another optimizer. In this section, we describe how we matched the 64 epoch,110 32,768 batch size result of LARS using standard Nesterov momentum.8111 A fair benchmark of training algorithms or hardware systems must account for stochasticity in112 individual training runs. In the MLPerf competition, the benchmark metric is the mean wall-clock113 time of 5 trials after the fastest and slowest trials are excluded. Only 4 out of the 5 trials need to reach114 the target accuracy and there is no explicit limit on the number of times an entrant can try a different115 set of 5 trials. Since our goal is to compare algorithms, rather than systems, we aim to match the116 LARS result in terms of training steps instead (but since Nesterov momentum is computationally117 simpler than LARS, this would also correspond to faster wall-clock time on an optimized system).118 Specifically, we measure the median validation accuracy over 50 training runs with a fixed budget of119 2,512 training steps9 at a batch size of 32,768. When we ran the published LARS training pipeline,10120 LARS achieved a median accuracy of 75.97% and reached the target in 35 out of 50 trials. We121 consider the LARS result to be matched by another optimizer if the median over 50 trials exceeds the122 target of 75.9%.123 2.1 Nesterov momentum at batch size 32k124 This section describes how we used the standard Nesterov momentum optimizer to train the ResNet-125 50 v1.5 on ImageNet to 75.9% validation accuracy in 2,512 update steps at a batch size of 32,768,126 matching the best published LARS result at this batch size. Although we implemented our own127 training program, the only logical changes we made to the published LARS pipeline were to the128 optimizer and the optimization hyperparameters. 
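For concreteness, the two evaluation statistics described above, the MLPerf timing metric (mean over 5 trials after dropping the fastest and slowest) and the step-budget matching criterion used in this section (median validation accuracy over 50 trials above the 75.9% target), amount to the two small helpers below. The function names are ours, not part of any benchmark code.

```python
import numpy as np

def mlperf_time_metric(wall_clock_times):
    """MLPerf-style timing: mean of 5 trials after excluding the fastest and slowest."""
    t = np.sort(np.asarray(wall_clock_times, dtype=float))
    assert t.size == 5, "the MLPerf metric is defined over exactly 5 trials"
    return t[1:-1].mean()

def matches_lars(val_accuracies, target=0.759):
    """Step-budget criterion used here: median accuracy over the trials exceeds the target."""
    return float(np.median(np.asarray(val_accuracies))) > target
```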
Our model implementation and data pre-processing pipeline were identical to those required under the MLPerf closed division rules (see Appendix B).

Footnotes 5–10: (5) https://git.io/JtknD (6) https://mlperf.org/training-results-0-6 (7) https://mlperf.org/training-results-0-7 (8) The 88 epoch, 65,536 batch size result is faster in terms of wall-clock time but requires more training epochs, indicating that it is beyond LARS's perfect scaling regime. Although LARS obtains diminishing returns when increasing the batch size from 32,768 to 65,536, future work could investigate whether Nesterov momentum drops off more or less rapidly than LARS. (9) Corresponding to 64 training epochs in Kumar et al. [2019]. (10) https://git.io/JtsLQ

We present two Nesterov momentum hyperparameter configurations that achieve comparable performance to LARS. Configuration A achieved a median accuracy of 75.97% (the same as LARS) and reached the target accuracy in 34 out of 50 trials. Configuration B is a modified version of Configuration A designed to make as few changes as possible to the LARS hyperparameters; it achieved a median accuracy of 75.92% and reached the target in 29 out of 50 trials. See Appendix D.1 for the complete hyperparameter configurations.

To achieve these results, we tuned the hyperparameters of the training pipeline from scratch using Nesterov momentum. We ran a series of experiments, each of which searched over a hand-designed hyperparameter search space using quasi-random search [Bousquet et al., 2017]. Between each experiment, we modified the previous search space and/or tweaked the training program to include optimization tricks and non-default hyperparameter values we discovered in the state-of-the-art LARS pipeline. The full sequence of experiments we ran, including the number of trials, hyperparameters tuned, and search space ranges, is provided in Appendix D.4. Once we had matched the LARS result with Configuration A, we tried setting each hyperparameter to its value in the LARS pipeline in order to find the minimal set of changes that still achieved the target result, producing Configuration B. The remainder of this section describes the hyperparameters we tuned and the techniques we applied on the journey to these results.

2.1.1 Nesterov Momentum Optimizer

Nesterov momentum is a variant of classical or "heavy-ball" momentum defined by the update rule

v_{t+1} = \mu v_t + \nabla \ell(\theta_t), \qquad \theta_{t+1} = \theta_t - \eta_t \left( \mu v_{t+1} + \nabla \ell(\theta_t) \right),

where v_0 = 0, \theta_t is the vector of model parameters after t steps, \nabla \ell(\theta_t) is the gradient of the loss function \ell(\theta) averaged over a batch of training examples, \mu is the momentum, and \eta_t is the learning rate for step t. We prefer Nesterov momentum over classical momentum because it tolerates larger values of its momentum parameter [Sutskever et al., 2013] and sometimes outperforms classical momentum, although the two algorithms perform similarly on many tasks [Shallue et al., 2019, Choi et al., 2019]. We tuned the Nesterov momentum \mu in Configurations A and B; a minimal code sketch of this update rule is given below.
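The following is a minimal NumPy transcription of the update rule above. It is a schematic of the rule itself, not the training program used for the experiments.

```python
import numpy as np

def nesterov_step(theta, v, grad, lr, mu):
    """One Nesterov momentum step:
    v_{t+1} = mu * v_t + g,   theta_{t+1} = theta_t - lr * (mu * v_{t+1} + g)."""
    v_next = mu * v + grad
    theta_next = theta - lr * (mu * v_next + grad)
    return theta_next, v_next

# Example: a few steps on a toy quadratic loss 0.5 * ||theta||^2 (gradient = theta).
theta, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(3):
    theta, v = nesterov_step(theta, v, grad=theta, lr=0.1, mu=0.9)
```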
We discuss the learning rate schedule {η_t} separately in Section 2.1.4.

2.1.2 Batch normalization

The ResNet-50 v1.5 model uses batch normalization [Ioffe and Szegedy, 2015], defined as

BN(x^{(l)}) = \frac{x^{(l)} - \mathrm{mean}(x^{(l)})}{\sqrt{\mathrm{var}(x^{(l)}) + \epsilon}} \cdot \gamma^{(l)} + \beta^{(l)},

where x^{(l)} is a vector of pre-normalization outputs from layer l, mean(·) and var(·) denote the element-wise sample mean and variance across the batch of training examples (see Footnote 11), and γ^{(l)} and β^{(l)} are trainable model parameters.

Batch normalization introduces the following tuneable hyperparameters: ε, the small constant added to the sample variance; the initial values of γ^{(l)} and β^{(l)}; and ρ, which governs the exponential moving averages of the scaling factors used in evaluation. The LARS pipeline uses ε = 10^{-5} and ρ = 0.9. It sets the initial value of β^{(l)} to 0.0 everywhere, but the initial value of γ^{(l)} depends on the layer: it sets γ^{(l)} to 0.0 in the final batch normalization layer of each residual block, and to 1.0 everywhere else. In Configuration A, we tuned ε, ρ, and γ_0, the initial value of γ^{(l)} in the final batch normalization layer of each residual block. In Configuration B, we used the same values as LARS for ε and ρ, but we found that choosing γ_0 between 0.0 and 1.0 was important for matching the LARS result with Nesterov momentum.

Footnote 11: In a distributed training environment the mean and variance are commonly computed over a subset of the full batch. The LARS pipeline uses a "virtual batch size" of 64, which we also use to avoid changing the training objective [Hoffer et al., 2017].

2.1.3 Regularization

In Configuration A, we tuned both the L2 regularization coefficient λ and the label smoothing coefficient τ [Szegedy et al., 2016]. The LARS pipeline uses λ = 10^{-4} and τ = 0.1. Crucially, the LARS pipeline does not apply L2 regularization to the bias variables of the ResNet model nor the batch normalization parameters γ^{(l)} and β^{(l)} (indeed, the published LARS pipeline does not even apply LARS to these parameters – it uses Heavy-ball momentum). This detail is extremely important for both LARS and Nesterov momentum to achieve the fastest training speed. Configuration B used the same λ and τ as Configuration A.

2.1.4 Learning rate schedule

The LARS pipeline uses a piecewise polynomial schedule

\eta_t = \begin{cases} \eta_{\mathrm{init}} + (\eta_{\mathrm{peak}} - \eta_{\mathrm{init}}) \left( \frac{t}{t_{\mathrm{warmup}}} \right)^{p_{\mathrm{warmup}}}, & t \le t_{\mathrm{warmup}} \\ \eta_{\mathrm{final}} + (\eta_{\mathrm{peak}} - \eta_{\mathrm{final}}) \left( \frac{T - t}{T - t_{\mathrm{warmup}}} \right)^{p_{\mathrm{decay}}}, & t > t_{\mathrm{warmup}} \end{cases}

with η_init = 0.0, η_peak = 29.0, η_final = 10^{-4}, p_warmup = 1, p_decay = 2, and t_warmup = 706 steps. In Configuration A, we re-tuned all of these hyperparameters with Nesterov momentum. In Configuration B, we set η_init, p_decay, and t_warmup to the same values as LARS, changing only p_warmup from 1 to 2 and re-scaling η_peak and η_final.

2.1.5 Comparing Nesterov momentum and LARS

Table 1 shows the hyperparameter values for Configuration B that differ from the state-of-the-art LARS pipeline. Aside from re-tuning the momentum, learning rate scale, and regularization hyperparameters (whose optimal values are all expected to change with the optimizer), the only changes are setting p_warmup to 2 instead of 1 and re-tuning γ_0.

Figure 1 shows the LARS learning rate schedule compared to the Nesterov momentum schedule. Even though these schedules are similar, we found that each optimizer had a different optimal value of the warmup polynomial power (the schedule itself is transcribed into code below).
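As referenced above, the piecewise polynomial schedule of Section 2.1.4 can be transcribed directly into code. The defaults below are the LARS pipeline's published constants and are shown only for illustration.

```python
def polynomial_warmup_decay(t, T, eta_init=0.0, eta_peak=29.0, eta_final=1e-4,
                            p_warmup=1, p_decay=2, t_warmup=706):
    """Polynomial warmup to eta_peak, then polynomial decay to eta_final over T steps."""
    if t <= t_warmup:
        return eta_init + (eta_peak - eta_init) * (t / t_warmup) ** p_warmup
    return eta_final + (eta_peak - eta_final) * ((T - t) / (T - t_warmup)) ** p_decay

# e.g. the 2,512-step budget used in this section:
schedule = [polynomial_warmup_decay(t, T=2512) for t in range(2512)]
```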
As Table 2 shows, Nesterov momentum performs198 better with pwarmup = 2 instead of 1, while the opposite199 is true with LARS. As discussed in Agarwal et al. [2020],200 optimizers can induce implicit step size schedules that201 strongly influence their training dynamics and solution202 quality, and it appears from Table 2 that the implicit step203 sizes of Nesterov momentum and LARS may evolve dif-204 ferently, causing the shapes of their optimal learning rate205 schedules to differ.206 Although the main concern of a practitioner is validation performance, the primary task of an207 optimization algorithm is to minimize training loss. Table 2 shows that Nesterov momentum achieves208 higher training accuracy than LARS, despite similar validation performance. Thus, it may be more209 appropriate to consider the layerwise normalization of LARS to be a regularization technique, rather210 than an optimization technique.211 Spending even more effort tuning LARS or Nesterov momentum would likely further improve the212 current state-of-the-art for that optimizer. Meaningful optimizer comparisons are only possible213 with independent and equally intensive tuning efforts, and we do not claim that either optimizer214 outperforms the other on this benchmark. That said, if the main evidence for LARS’s utility as a215 “large-batch optimizer” is its performance on this particular benchmark, then more evidence is needed216 to quantify any benefit it has over traditional, generic optimizers like Nesterov momentum.217 2.2 Lessons learned218 In hindsight, it was only necessary to make a few changes to the LARS pipeline to match its219 performance at batch size 32,768 with Nesterov momentum. However, Table 1 does not accurately220 represent the effort required when attempting to match a highly tuned training-speed benchmark.221 Firstly, as described in Sections 2.1.2 and 2.1.3, the strong results of LARS depend partly on a few222 subtle optimization tricks and non-default values of uncommonly-tuned hyperparameters. Fortunately,223 in this case we could discover these tricks by examining the open-source code required for MLPerf224 submissions, but machine learning research papers do not always report these important details.225 Researchers can easily waste a lot of experiments and produce misleading results before getting all of226 these details right. We demonstrate the importance of adding these tricks to our Nesterov momentum227 pipeline in Appendix C; without these tricks (or some new tricks), we likely would not have been228 able to match the LARS performance.229 Secondly, the learning rate schedule really matters when trying to maximize performance with a230 relatively small step budget. Both LARS and Nesterov momentum are sensitive to small deviations231 from the optimized learning rate schedules in Figure 1, and neither schedule works as well for the232 other optimizer. Although relatively minor changes were sufficient to match LARS with Nesterov233 momentum, there is no way to know a priori how the optimal schedule will look for a new optimizer234 Wu et al. [2018]. Even in toy settings where the optimal learning rate schedule can be derived, it235 does not fit into commonly used schedule families and depends strongly on the optimizer Zhang236 et al. [2019]. Indeed, this problem applies to the other optimization hyperparameters as well: it237 is extremely difficult to know which are worth considering ahead of time. 
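One example of such a trick, described in Sections 2.1.2 and 2.1.3, is that no L2 regularization is applied to bias and batch-normalization parameters. In code, this amounts to partitioning the parameters before adding the weight-decay term. The sketch below assumes parameters are stored in a flat name-to-array dictionary and that the names mark biases and batch-norm variables; both of these conventions are ours, for illustration only.

```python
import numpy as np

def l2_penalty_gradients(params, weight_decay):
    """Weight decay only on 'true' weights; biases and batch-norm scale/shift are skipped."""
    excluded = ("bias", "bn_gamma", "bn_beta")      # assumed naming convention
    grads = {}
    for name, w in params.items():
        if any(tag in name for tag in excluded):
            grads[name] = np.zeros_like(w)          # no L2 contribution for these parameters
        else:
            grads[name] = weight_decay * w          # standard L2 contribution
    return grads
```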
Finally, even when we238 narrowed down our hyperparemeter search spaces around the optimal point, the volume of our search239 spaces corresponding to near-peak performance was small, likely due to the small step budget [Shallue240 et al., 2019]. We investigate how these effects change with a less stringent step budget in Section 4.241 3 Stronger BERT pretraining speed baselines242 You et al. [2019] developed the LAMB optimizer in the hope of speeding up training for BERT-Large243 [Bidirectional Encoder Representations from Transformers, Devlin et al., 2018]. BERT training244 consists of two phases. The “pretraining” phase has two objectives: (1) predicting masked tokens245 based on the rest of the sequence (a masked language model), and (2) predicting whether two246 given sentences follow one from another. Finally, the “fine-tuning” phase refines the model for a247 downstream task of interest. BERT pretraining takes a considerable amount of time (up to 3 days on248 16 Cloud TPU-v3 chips Jouppi et al. [2017]), whereas the fine-tuning phase is typically much faster.249 Model quality is typically assessed on the downstream metrics, not on pretraining loss, making BERT250 training a somewhat awkward benchmark for optimization research.251 You et al. [2019] used LAMB for BERT pretraining with batch sizes up to 65,536 and claimed that252 LAMB outperforms Adam batch size 16,384 and beyond. The LAMB optimizer has since appeared253 in several NLP toolkits, including as Microsoft DeepSpeed and NVIDIA Multi-node BERT training,254 and as a benchmark task in MLPerf v0.7.12255 As shown in Table 3, we trained Adam (with decoupled weight decay) baselines that achieve better256 results than both the LAMB and Adam results reported in You et al. [2019]. Our new Adam257 baselines obtain better F1 scores on the development set of the SQuaD v1.1 task in the same number258 of training steps as LAMB for both batch size 32,768 and the hybrid 65,536-then-32,768 batch259 size training regime in You et al. [2019]. We also ran Adam at batch size 65,536 to reach nearly260 the same F1 score as the hybrid batch size LAMB result, but in much fewer training steps. We261 believe 7,818 steps is a new state-of-the-art for BERT pretraining speed [in our experiments, it262 also improves upon the 76-minute record claimed in You et al., 2019]. Additionally, at batch263 size 32,768 our Adam baseline got a better pretraining loss of 1.277 compared to LAMB’s 1.342.264 12 We do not consider the MLPerf task in this paper since it is a warm-start, partial training task. 265 We used the same experimental setup as You266 et al. [2019], including two pretraining phases267 with max sequence lengths of 128 and then 512.268 In order to match You et al. [2019], we reported269 the F1 score on the downstream SQuaD v1.1270 task as the target metric, although this metric271 introduces potential confounds: optimization272 efficiency should be measured on the training273 task using training and held-out data sets. Fortunately, in this case better pretraining performance274 correlated a with higher F1 score after fine-tuning. See Appendix B.2 for additional experiment275 details. We tuned Adam hyperparameters independently for each pretraining phase, specifically276 learning rate η, β1, β2, the polynomial power for the learning rate warmup pwarmup, and weight277 decay λ, using quasi-random search [Bousquet et al., 2017]. 
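Quasi-random search [Bousquet et al., 2017] replaces independent uniform draws with a low-discrepancy sequence so that the search space is covered more evenly. As a self-contained illustration, a Halton sequence can be used to propose (learning rate, β2) pairs; the ranges below are placeholders, not the search spaces from Appendix D.2.

```python
import numpy as np

def halton(index, base):
    """The index-th element of the van der Corput sequence in the given base,
    the one-dimensional building block of a Halton sequence."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def sample_trial(i, lr_range=(1e-5, 1e-2), beta2_range=(0.9, 0.999)):
    """Map the i-th 2-D Halton point to a (learning rate, beta2) proposal.
    The learning rate is sampled on a log scale; ranges are illustrative only."""
    u1, u2 = halton(i + 1, 2), halton(i + 1, 3)
    log_lo, log_hi = np.log10(lr_range[0]), np.log10(lr_range[1])
    lr = 10 ** (log_lo + u1 * (log_hi - log_lo))
    beta2 = beta2_range[0] + u2 * (beta2_range[1] - beta2_range[0])
    return lr, beta2

trials = [sample_trial(i) for i in range(50)]
```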
See Appendix D.2 for the search spaces.278 In addition to hyperparmeter tuning, our improved Adam results at these batch sizes are also likely279 due to two implementation differences. First, the Adam implementation in You et al. [2019] comes280 from the BERT open source code base, in which Adam is missing the standard bias correction.13281 The Adam bias correction acts as an additional step size warm-up, thereby potentially improving the282 stability in the initial steps of training. Second, the BERT learning rate schedule had a discontinuity283 at the start of the decay phase due to the learning rate decay being incorrectly applied during warm-up284 14 (see Figure 2 in Appendix B). This peculiarity is part of the official BERT release and is present in285 3000+ copies of the BERT Training code on GitHub.286 4 Investigating a less stringent step budget287 Part of what makes comparing optimizers so difficult is that the hyperparameter tuning tends to288 dominate the comparisons [Choi et al., 2019]. Moreover, tuning becomes especially difficult when289 we demand a fixed epoch budget even when dramatically increasing the batch size [Shallue et al.,290 2019]. Fixing the epoch budget as the batch size increases is equivalent to demanding perfect scaling291 (i.e. that the number of training steps decreases by the same factor that the batch size is increased).292 We can view the role of hyperparameter tuning for large batch training as resisting the inevitable end293 of perfect scaling. For example, it might be possible to extend perfect scaling using delicately tuned294 learning rate schedules, but comparing optimizers under these conditions can make the learning rate295 schedule dominate the comparison by favoring some algorithms over others. Therefore, in order to296 better understand the behavior of LARS and LAMB compared to Nesterov Momentum and Adam, we297 ran additional ResNet-50 experiments with a more generous 6,000 step budget (vs 2,512 in Section 2)298 and a more simplistic cosine learning rate schedule. At batch size 32,768, this budget should let us299 reach better validation accuracy than the MLPerf target of 75.9%.300 Although not mentioned in You et al. [2017], the state-of-the-art MLPerf pipeline for “LARS” actually301 uses both LARS and Heavy-ball Momentum, with Momentum applied to the batch normalization and302 ResNet bias parameters and LARS applied to the other parameters. You et al. [2019] does not mention303 whether LAMB was only applied to some parameters and not others. If layerwise normalization can304 be harmful for some model parameters, this is critical information for practitioners using LARS or305 LAMB, since it might not be obvious which optimizer to apply to which parameters. To investigate306 this, we trained both pure LARS and LAMB configurations, as well as configurations that did not307 apply layerwise normalization to the batch normalization and ResNet bias parameters. Moreover,308 LAMB’s underlying Adam implementation defaults to = 10−6, rather than the typical 10−7 or309 10−8. In some cases, can be a critical hyperparameter for Adam [Choi et al., 2019], so we included310 Adam configurations with both = 10−6 and = 10−8.311 Table 4 shows the validation accuracy of these different configurations after training for 6,000312 steps with batch size 32,768. In every case, we used a simple cosine decay learning rate sched-313 ule and tuned the initial learning rate and weight decay using quasi-random search. 
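Because several of the configurations compared here are Adam or LAMB variants distinguished by ε and by whether layerwise normalization is applied, it is worth pinning down the baseline update itself. Below is a generic sketch of an Adam step with the standard bias correction (the detail noted at the end of Section 3 as missing from the BERT code base) and decoupled weight decay; it is a reference implementation under those assumptions, not the exact training code used in these experiments.

```python
import numpy as np

def adam_step(theta, m, v, grad, t, lr, beta1=0.9, beta2=0.999, eps=1e-8, weight_decay=0.0):
    """One Adam step with bias correction and optional decoupled weight decay.

    t is the 1-indexed step count. Omitting the two bias-correction lines gives the
    uncorrected variant discussed above, which lacks the implicit early-step damping.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)            # bias correction
    v_hat = v / (1 - beta2 ** t)            # bias correction
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps) - lr * weight_decay * theta
    return theta, m, v
```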
We used314 momentum parameters of 0.98 for Nesterov momentum and 0.929 for LARS, respectively, based315 on the tuned values from Section 2. We used default hyperparameters for Adam and LAMB316 except where specified. We set all other hyperparameters to the same values as the state-of-the-317 art LARS pipeline, except we set γ0 = 1.0. See Appendix D.3 for more details. As expected,318 13 https://git.io/JtY8d 14 See https://git.io/JtnQW and https://git.io/JtnQ8. highly tuned learning rate schedules and optimizer hyperparameters are no longer necessary with319 a less stringent step budget. Multiple optimizer configurations in Table 4 exceed the MLPerf320 target accuracy of 75.9% at batch size 32,768 with minimal tuning. Training with larger batch321 sizes is not fundamentally unstable: stringent step budgets make hyperparameter tuning trickier.322 LAMB, are introduced alongside claims that337 the new optimizer does not require any—or at338 least minimal—tuning. Unfortunately, these339 claims require a lot of work to support, since340 they require trying the optimizer on new prob-341 lems without using those problems during the342 development of the algorithm. Although our ex-343 periments here are not sufficient to determine344 which optimizers are easiest to tune, experiments like these that operate outside the regime of highly345 tuned learning rate schedules can serve as a starting point. In this experiment, LARS and LAMB do346 not appear to have an advantage in how easy they are to tune even on a dataset and model that were347 used in the development of both of those algorithms. LAMB is a variant of Adam and performs about348 the same as Adam with the same value of ; LARS is more analogous to Momentum and indeed349 Nesterov momentum and LARS have similar performance.350 5 Discussion351 Our results show that standard, generic optimizers suffice for achieving strong results across batch352 sizes. Therefore, any research program to create new optimizers for training at larger batch sizes353 must start from the fact that Momentum, Adam, and likely other standard methods work fine at batch354 sizes as large as those considered in this paper. The LARS and LAMB update rules have no more355 to do with the batch size (or “large” batches) than the Momentum or Adam update rules. Although356 You et al. [2019] presented convergence rate bounds for LARS and LAMB to support their claims357 of superior performance, we show in Appendix A that Adam satisfies a similar bound to LAMB.358 These bounds all rely on very unrealistic assumptions.15 Most of all, they are loose upper bounds359 on the worst case behavior of the algorithms, not accurate reflections of optimizer performance in360 reality. Whether layer-wise normalization can be useful for optimization or regularization remains an361 open question. However, if LARS and LAMB have any advantage over standard techniques, it is not362 that they work dramatically better on the tasks and batch sizes in You et al. [2017, 2019]. This is363 not to suggest that there is nothing interesting about studying neural network optimization at larger364 batch sizes. For example, as gradient noise decreases, there may be opportunities to harness curvature365 information and extend the region of perfect scaling [Zhang et al., 2019]. 
However, there is currently366 no evidence that LARS and LAMB scale better than Momentum and Adam.367 Our primary concern in this paper has been matching the state of the art—and establishing new368 baselines—for training speed measurements of the sort used to justify new techniques and algorithms369 for training with larger batch sizes. In contrast, many practitioners are more concerned with obtaining370 the best possible validation error with a somewhat flexible training time budget. Part of the reason371 why matching LARS at batch size 32,768 was non-trivial is because getting state of the art training372 15 All convergence bounds assume no momentum is used, and the Lavg bound for LAMB also assumes β2 = 0, when it is typically 0.999. Additionally, Lavg could still be large if L∞ is large, but we leave an empirical analysis of this to future work. speed requires several tricks and implementation details that are not often discussed. It was not373 obvious to us a priori which ones would prove crucial. These details do not involve changes to the374 optimizer, but they interact with the optimizer in a regime where all hyperparameters need to be well375 tuned to stay competitive, making it necessary to re-tune everything for a new optimizer.376 In neural network optimization research, training loss is rarely discussed in detail and evaluation377 centers on validation/test performance since that is what practitioners care most about. However,378 although we shouldn’t only consider training loss, it is counter-intuitive and counter-productive to379 elide a careful investigation of the actual objective of the optimizer. If a new optimizer achieves better380 test performance, but shows no speedup on training loss, then perhaps it is not a better optimizer so381 much as an indirect regularizer. 16 Indeed, in our experiments we found that Nesterov momentum382 achieves noticeably better training accuracy on ResNet-50 than the LARS configuration we used,383 despite reaching roughly the same validation accuracy. Properly disentangling possible regularization384 benefits from optimization speed-ups is crucial if we are to understand neural network training,385 especially at larger batch sizes where we lose some of the regularization effect of gradient noise.386 Hypothetically, if the primary benefit of a training procedure is regularization, then it would be better387 to compare the method with other regularization baselines than other optimizers.388 Ultimately, we only care about batch size to the extent that higher degrees of data parallelism lead389 to faster training. Training with a larger batch size is a means, not the end goal. New optimizers—390 whether designed for generic batch sizes or larger batch sizes—have the potential to dramatically391 improve algorithmic efficiency across multiple workloads, but our results show that standard opti-392 mizers can match the performance of newer alternatives on the workloads we considered. Indeed,393 despite the legion of new update rule variants being proposed in the literature, standard Adam and394 Momentum remain the workhorses of practitioners and researchers alike, while independent empirical395 comparisons consistently find no clear winner when optimizers are compared across a variety of396 workloads [Schmidt et al., 2020]. Meanwhile, as Choi et al. [2019] and our results underscore,397 comparisons between optimizers crucially depend on the effort spent tuning hyperparameters for each398 optimizer. 
Given these facts, we should regard with extreme caution studies claiming to show the399 superiority of one particular optimizer over others. Part of the issue stems from current incentives in400 the research community; we overvalue the novelty of new methods and undervalue establishing strong401 baselines to measure progress against. This is particularly problematic in the study of optimizers,402 where the learning rate schedule is arguably more important than the choice of the optimizer update403 rule itself! As our results show, the best learning rate schedule is tightly coupled with the optimizer,404 meaning that tuning the learning rate schedule for a new optimizer will generally favor the new405 optimizer over a baseline unless the schedule of the baseline is afforded the same tuning effort.406 6 Conclusion407 In this work, we demonstrated that standard optimizers, without any layer-wise normalization408 techniques, can match or exceed the large batch size results used to justify LARS and LAMB. Future409 work attempting to argue that a new algorithm is useful by comparing to baseline methods or results,410 including those established in this paper, faces a key challenge in showing that the gains are due to the411 new method and not merely due to better tuning or changes to the training pipeline (e.g. regularization412 tricks). Although gains from tuning will eventually saturate, we can, in principle, always invest more413 effort in tuning and potentially get better results for any optimizer. However, our goal should be414 developing optimizers that work better across many different workloads when taking into account the415 amount of additional tuning they require.416 Moving forward, if we are to reliably make progress we need to rethink how we compare and evaluate417 new optimizers for neural network training. Given how sensitive optimizer performance is to the418 hyperparameter tuning protocol and how difficult it is to quantify hyperparameter tuning effort, we419 can’t expect experiments with self-reported baselines to always lead to fair comparisons. Ideally, new420 training methods would be evaluated in a standardized competitive benchmark, where submitters of421 new optimizers do not have full knowledge of the evaluation workloads. Some efforts in this direction422 have started, for instance the MLCommons Algorithmic Efficiency Working Group17, but more work423 needs to be done to produce incentives for the community to publish well-tuned baselines and to424 reward researchers that conduct the most rigorous empirical comparisons.425 16 Deep learning folk wisdom is that “any method to make training less effective can serve as a regularizer,” whether it is a bug in gradients or a clever algorithm. 17 https://mlcommons.org/en/groups/research-algorithms/ Checklist426 1. For all authors...427 (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s428 contributions and scope? [Yes] See Sections 2, 3, 4429 (b) Did you describe the limitations of your work? [Yes] We had a lengthy discussion of430 the limitations and scope of the work in Section 5431 (c) Did you discuss any potential negative societal impacts of your work? [No] We did432 not discuss this in the main text. Our primary contribution is to improve experimental433 protocols for other methodological work, which is so removed from specific machine434 learning applications that it is hard to determine the net impact. 
1. What is the reviewer's main concern regarding the paper?
2. What does the reviewer think about the comparison between Nesterov and LARS/LAMB in the paper?
3. How does the reviewer assess the novelty and contribution of the paper?
4. Does the reviewer have any questions or concerns about the experimental results reported in the paper?
5. What is the reviewer's view on the impact of the paper on the machine learning community?
Summary Of The Paper
Review
Summary Of The Paper
I carefully read the responses from the authors. However, the authors did not address my concern. I'd like to keep my score.

The authors used a huge amount of computing resources to tune the hyper-parameters of Adam/SGD and claimed that they can match the performance of LARS/LAMB for large-batch training. I think the comparison is not fair.

Review
The performance of LARS would be much higher if the same amount of computing resources were used to tune its hyper-parameters. For ImageNet, the authors did not report results for larger batch sizes (e.g., 64K or 128K). My experiments show that LARS performs much better than Nesterov at larger batch sizes. Even with unlimited computing resources, the performance of LARS will still be better than that of Nesterov. The authors would probably eventually obtain results like Figure 7 of https://arxiv.org/pdf/2006.08517v1.pdf.

If we take a look at Table 1, we can see that the hyper-parameters of Nesterov are meticulously selected or cherry-picked. Since the tuning cost is so high, it is almost impossible for users to adopt this solution. For example, to my knowledge, the authors of LARS did not tune the hyper-parameters of Batch Normalization, whereas the authors of this paper used a lot of computing resources to tune them: "We ran a series of experiments, each of which searched over a hand-designed hyper-parameter search space using quasi-random search [Bousquet et al., 2017]." To my knowledge, the LARS authors did not use any advanced hyper-parameter tuning methods. This means the comparison is not fair.

"Table 2 shows that Nesterov momentum achieves higher training accuracy than LARS, despite similar validation performance. Thus, it may be more appropriate to consider the layerwise normalization of LARS to be a regularization technique, rather than an optimization technique." This conclusion is doubtful. First, the authors probably need to sample many different convergence points (including low-accuracy points) to study the optimization behavior of LARS/Nesterov. Second, the generalization performance of different minima may depend on different implicit properties, which cannot be explained in such a simple way.

By the way, I cannot reproduce the authors' results even though I fully trust the correctness of this paper. The reason is that we often need to use different hyper-parameters on different systems/hardware for large-batch training. The high tuning cost makes this process very hard.

For BERT, I suspect the authors tuned the hyper-parameters of the fine-tuning process, which would make the comparison very unfair, as the LAMB authors did not tune it. Even if the authors did not tune the fine-tuning process, the comparison is still unfair because LAMB can work without re-tuning when the batch size changes. As mentioned by the LAMB authors (caption of Table 4 in https://openreview.net/pdf?id=Syx4wnEtvH): "We can achieve an even higher F1 score if we manually tune the hyperparameters". The LAMB authors reported untuned results in Tables 4 and 5 of https://openreview.net/pdf?id=Syx4wnEtvH. The authors of this paper not only tuned common hyper-parameters (e.g., the learning rate) but also tuned uncommon ones (e.g., beta1 and beta2 in Adam). I suspect only big companies can afford this kind of hyper-parameter search for BERT pre-training.

The novelty of this paper is very low. The key technical contribution is just tuning hyper-parameters, and the authors did not provide any deep analysis. If deep analysis is not required, there are already many blog posts on this topic (e.g., https://medium.com/fenwicks/tutorial-2-94-accuracy-on-cifar10-in-2-minutes-7b5aaecd9cdd) that also significantly improve the performance of SGD with various optimization tricks. Similar tricks would likely improve the performance of LARS/LAMB as well. The key advantage of this paper is that the authors can use a significant amount of computing resources while other researchers cannot. I worry that this paper may mislead the ML community and have a negative impact on academia's budget/planning. In my humble opinion, this paper is not a good fit for top ML conferences like NeurIPS/ICML/ICLR.
NIPS
Title LGDN: Language-Guided Denoising Network for Video-Language Modeling Abstract Video-language modeling has attracted much attention with the rapid growth of web videos. Most existing methods assume that the video frames and text description are semantically correlated, and focus on video-language modeling at the video level. However, this hypothesis often fails for two reasons: (1) With the rich semantics of video contents, it is difficult to cover all frames with a single video-level description; (2) A raw video typically has noisy/meaningless information (e.g., scenery shot, transition or teaser). Although a number of recent works deploy an attention mechanism to alleviate this problem, the irrelevant/noisy information still makes it very difficult to address. To overcome this challenge, we propose an efficient and effective model, termed Language-Guided Denoising Network (LGDN), for video-language modeling. Different from most existing methods that utilize all extracted video frames, LGDN dynamically filters out the misaligned or redundant frames under language supervision and retains only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state of the art by large margins. We also provide a detailed ablation study to reveal the critical importance of solving the noise issue, in the hope of inspiring future video-language work. 1 Introduction Humans are exposed to the world through a variety of sensory organs, such as eyes, ears, and the sense of touch. In the past few years, multi-modal data (e.g., text or video) has grown and accumulated rapidly on the Internet, which brings increasing demand for video-language understanding. As one of the fundamental topics, video-language modeling is still challenging due to the heterogeneity of video-text data. More notably, video-text data is typically noisy (e.g., misaligned or semi-relevant, as shown in Figure 1), which makes video-language modeling intractable. The dominant paradigm [8, 30, 13, 14, 45] for video-language modeling is to first extract language features and dense video features via off-the-shelf language and vision models (e.g., BERT [7], 3D CNN [48]), and then model the cross-modal representation by defining an objective function (e.g., triplet loss [17]) within a joint semantic space. Although achieving great success, these methods typically sample frames densely from the full sequence of a raw video to obtain richer representations and thus incur excessive computation. Since the heavy computation makes it challenging to train the whole network end-to-end, they often achieve sub-optimal performance in video-language modeling. Recently, ClipBERT [25] proposes a sparse sampling strategy to tackle this drawback. Concretely, ClipBERT first samples video frames sparsely (8–16 frames per video), and then models the cross-modal alignment at the frame level. This sparse sampling paradigm enables end-to-end training, leading to much better performance. Nevertheless, token-level cross-modal interaction, which has achieved great success in image-text modeling [21, 26], is still not well explored for video-language modeling due to the heavy computational cost (even with 8–16 frames per video). Moreover, both the dominant paradigm and ClipBERT's sparse sampling paradigm assume that video frames and the text description (w.r.t.
a video-text pair) are semantically correlated, which is often invalid in practice. The correlation hypothesis often fails for two reasons: (1) With the rich semantics of video contents, it is hard to cover all frames with a single video-level description; (2) A raw video often has noisy or meaningless information (e.g., scenery shot, transition or teaser). For the dominant paradigm, which utilizes densely-sampled frames, the irrelevant/noisy information makes it hard to learn high-quality video-language representations, even with a self-attention mechanism [43]. For the sparse sampling paradigm used in ClipBERT, which models the cross-modal alignment at the frame level, the misaligned frame-text pairs are wrongly forced to become closer, which inevitably leads to inaccurate cross-modal alignment. Overall, due to this noise issue (see Figure 1), video-language modeling is still challenging. Note that humans also encounter such a problem in reality, but seem to be born with the ability to resist noise. That is, everyone can quickly scan through the entire video, easily ignore the noisy frames, and focus on the salient ones given the text. Motivated by this human ability, we propose a Language-Guided Denoising Network termed LGDN to dynamically filter out irrelevant or redundant information under language supervision for better video-language modeling. Concretely, we devise a Salient Frame Proposal (SFP) mechanism which adopts four strategies to estimate frame-level relevance scores under language supervision and selects only salient frames (per video) for precise video-language modeling. Although the frame embeddings and text embeddings can be (roughly) aligned by introducing a Momentum Video-Level Contrastive Learning (MVCL) module, it is vital to precisely establish frame-text alignment for proposing salient frames. Therefore, based on multiple instance learning (MIL), we propose a Momentum Frame-Level Multiple Salient-instance Learning (MSL) Contrastive Learning (MFCL) module for video-language modeling at the frame level. Finally, with our SFP mechanism, we propose a Language-Guided Salient Frame Matching (LSFM) module for fine-grained alignment, which adopts a token-aware cross-attention Transformer for cross-modal token-level alignment. Our main contributions are as follows: (1) We devise a salient frame proposal mechanism that can dynamically filter out irrelevant information under language supervision while maintaining salient information. (2) We propose an end-to-end framework termed LGDN for video-language modeling with cross-modal interaction at three levels: language-guided salient frame matching at the token level, momentum frame-level MSL-contrastive learning, and momentum video-level contrastive learning. (3) We evaluate our LGDN on five public datasets and find that it outperforms the latest competitors by large margins. We also provide a detailed ablation study to reveal the critical importance of solving the noise issue, in the hope of inspiring future video-language work. 2 Related Work Video-Language Modeling. Video-language modeling, a fundamental research topic that is beneficial for search engines and video recommendation, has attracted a lot of attention in recent years with the rapid growth of web videos. Previous works have made great efforts to model richer representations for the video and text modalities and then align the features of the two modalities via an objective function (e.g., triplet loss).
One common representative approach [5, 20] is to adopt a Graph Convolution Network (GCN) to extract richer information for video-text retrieval. Another representative approach [30, 13, 14, 52, 29] is to exploit extra experts (e.g., object, motion, speech) for video-language modeling. Recently, ClipBERT [25] proposes a sparse sampling strategy that enables end-to-end training, thus achieving higher performance. Moreover, Frozen in Time [2] also follows a sparse sampling paradigm, and proposes an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. However, as illustrated in Figure 1, a raw video typically has noisy/meaningless information, and thus the presence of misaligned frames is inevitable during video-language modeling. Note that most existing methods assume that the video frames and paired text are semantically correlated, without considering the noise phenomenon. Although a self-attention mechanism has been widely applied, the misaligned frames still harm the cross-modal alignment. In this work, we thus propose a salient frame proposal mechanism to effectively (and directly) address this problem. Cross-Modal Alignment Objective Functions. Most previous methods adopt the triplet loss as the major objective function for video-language modeling. CGMSCD [14] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning. More recent works [41, 12, 19] propose to apply the InfoNCE contrastive loss [47, 38, 6] to enhance representation learning. Particularly, BriVL [18], ALBEF [26] and COTS [32] introduce a momentum mechanism [15] to maintain more negative samples for image-text contrastive learning. Following these state-of-the-art models, we propose momentum video-level contrastive learning for video-text global alignment in this paper. Note that MIL-NCE [35] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [36]. In this work, we thus propose momentum frame-level MSL-contrastive learning to assist in addressing the misaligned frame problem. 3 Methodology Figure 2 gives a brief overview of our LGDN framework for video-language modeling, which is composed of four main components: 1) language and vision representation extractors; 2) momentum video-level contrastive learning; 3) momentum frame-level MSL-contrastive learning; and 4) language-guided salient frame matching. In the following, we will describe each component in detail. 3.1 Feature Representation Vision Representation. Given an input video $V$ as a sequence of frames $\{E_i\}_{i=1}^{N}$, where $N$ is the length of the video, we utilize a 2-D vision Transformer (e.g., ViT) as our vision backbone to extract frame-level features $\mathbf{E} = \{E_1, E_2, \ldots, E_N\}$. Each frame $E_i$ of video $V$ can be represented as $E_i = [e_{cls}; e_1; \ldots; e_{k_v-1}] \in \mathbb{R}^{k_v \times D_v}$, where $e_{cls}$ denotes the [CLS] token, $k_v$ denotes the patch sequence length, and $D_v$ denotes the dimension of the patch embeddings. We utilize a fully-connected layer to project the [CLS] token into the frame embedding $f^e_i$. We then deploy a temporal module $\mathcal{T}$ (e.g., a Transformer layer) to aggregate the frame embeddings and obtain the final video embedding: $f^v = \mathcal{T}([f^e_1, f^e_2, \ldots, f^e_N]) = f^v(V)$, (1) where $f^v$ denotes the entire vision (video) encoder.
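To make the frame projection and the temporal aggregation of Eq. (1) concrete, here is a minimal PyTorch sketch. It is an illustration rather than the authors' code: the module name, the projection dimension, the single nn.TransformerEncoderLayer standing in for the temporal module $\mathcal{T}$, and the mean pooling over frame tokens are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class TemporalAggregator(nn.Module):
    """Projects per-frame [CLS] tokens to frame embeddings f^e_i and aggregates them
    into a video embedding f^v, mirroring Eq. (1)."""

    def __init__(self, backbone_dim=768, embed_dim=256, num_heads=4):
        super().__init__()
        # Fully-connected projection of each frame's [CLS] token (frame embedding f^e_i).
        self.frame_proj = nn.Linear(backbone_dim, embed_dim)
        # A single Transformer layer standing in for the temporal module T.
        self.temporal = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)

    def forward(self, frame_cls_tokens):
        # frame_cls_tokens: (batch, N_frames, backbone_dim), e.g., ViT [CLS] outputs per frame.
        frame_emb = self.frame_proj(frame_cls_tokens)   # (batch, N, embed_dim)
        tokens = self.temporal(frame_emb)               # temporal interaction across frames
        video_emb = tokens.mean(dim=1)                  # pooled video embedding f^v (assumed pooling)
        return frame_emb, video_emb

# Dummy usage: a batch of 2 videos with 16 sampled frames each.
cls_tokens = torch.randn(2, 16, 768)
frame_emb, video_emb = TemporalAggregator()(cls_tokens)
print(frame_emb.shape, video_emb.shape)  # torch.Size([2, 16, 256]) torch.Size([2, 256])
```

In the paper, the per-frame [CLS] tokens would come from the ViT backbone applied to each sampled frame; here they are replaced by random tensors purely to show shapes.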
Language Representation. Given an input text $L$, we utilize BERT-Base as our language backbone to extract the text feature $\mathbf{L}$, which can be represented as $\mathbf{L} = [l_{cls}; l_1; \ldots; l_{k_l-1}] \in \mathbb{R}^{k_l \times D_l}$, where $l_{cls}$ is the [CLS] token, $k_l$ is the token sequence length, and $D_l$ is the dimension of the token embeddings. We deploy a fully-connected layer to project the [CLS] token into the text embedding $f^l = f^l(L)$, where $f^l$ is the language encoder. 3.2 Momentum Video-Level Contrastive Learning (MVCL) Module Note that our LGDN is designed to filter out the unmatched/redundant frames for better token-level alignment, without explicitly leveraging the temporal information of the videos. Therefore, we first introduce a Momentum Video-Level Contrastive Learning (MVCL) module to address this problem. The MVCL module utilizes a temporal module (e.g., a Transformer block) to aggregate the frame embeddings into the video embedding. Contrastive learning is then applied for holistic video-text alignment. However, video data takes up large GPU memory and the mini-batch size tends to be small under strict resource constraints, which harms contrastive learning. Inspired by MoCo [15], we introduce the momentum mechanism to maintain massive negative samples in a memory bank for contrastive learning. Concretely, we first maintain a video memory bank $M^v = \{\hat{q}^v_j\}_{j=1}^{N_m}$ and a text memory bank $M^l = \{\hat{q}^l_j\}_{j=1}^{N_m}$ to store video/text features, where $N_m$ denotes the memory bank size and $\hat{q}^v_j$ / $\hat{q}^l_j$ denotes the $j$-th stored video/text feature vector. Let $f^v$ (with parameters $\theta^v$) and $\hat{f}^v$ (with parameters $\hat{\theta}^v$) denote the vision encoder and the vision momentum encoder, respectively. Similarly, let $f^l$ (with parameters $\theta^l$) and $\hat{f}^l$ (with parameters $\hat{\theta}^l$) denote the language encoder and the language momentum encoder, respectively. The parameters of the momentum encoders are updated by: $\hat{\theta}^v = m \cdot \hat{\theta}^v + (1-m) \cdot \theta^v$, $\hat{\theta}^l = m \cdot \hat{\theta}^l + (1-m) \cdot \theta^l$, (2) where $m$ is the momentum coefficient hyper-parameter. The loss function is constructed as follows: for each video $V_i$ in mini-batch $B$, we define the video-to-text contrastive loss between its paired text $L_i$ and all negative samples in the text memory bank $M^l$, resulting in an InfoNCE loss (with $\tau$ being the temperature hyper-parameter): $\mathcal{L}_{V2T} = -\frac{1}{|B|} \sum_{(V_i, L_i) \in B} \log \frac{\exp(\cos(f^v_i, \hat{f}^l_i)/\tau)}{\exp(\cos(f^v_i, \hat{f}^l_i)/\tau) + \sum_{\hat{q}^l_j \in M^l} \exp(\cos(f^v_i, \hat{q}^l_j)/\tau)}$, (3) where $\hat{f}^l_i = \hat{f}^l(L_i)$, and the similarity of two features is measured by the cosine similarity. Similarly, given each text description $L_i$ in mini-batch $B$, we define the text-to-video contrastive loss as: $\mathcal{L}_{T2V} = -\frac{1}{|B|} \sum_{(V_i, L_i) \in B} \log \frac{\exp(\cos(f^l_i, \hat{f}^v_i)/\tau)}{\exp(\cos(f^l_i, \hat{f}^v_i)/\tau) + \sum_{\hat{q}^v_j \in M^v} \exp(\cos(f^l_i, \hat{q}^v_j)/\tau)}$, (4) where $\hat{f}^v_i = \hat{f}^v(V_i)$. Finally, the objective function for MVCL is defined as follows: $\mathcal{L}_{MVCL} = \mathcal{L}_{V2T} + \mathcal{L}_{T2V}$. (5) 3.3 Salient Frame Proposal (SFP) Mechanism As shown in Figure 1, video-text data inevitably contains misaligned frame-text pairs. Although an attention mechanism has been applied in Eq. (1), the irrelevant and noisy information would still mislead the cross-modal alignment in our model. To alleviate this problem, we thus propose a Salient Frame Proposal (SFP) mechanism for video-language modeling. The core idea of our SFP mechanism is to dynamically filter out misaligned or redundant frames and maintain only a few important frames that represent the video well, which we call salient frames. Formally, for each video-text pair, we first identify the relevance score $R(j|i)$ between the text $L_i$ and the $j$-th frame $E_{i,j}$ of the video $V_i$.
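Before detailing the frame-selection step, the momentum update of Eq. (2) and the memory-bank InfoNCE losses of Eqs. (3)-(5) can be sketched in a few lines of PyTorch. This is a simplified illustration, not the authors' implementation: embeddings are assumed to be precomputed and L2-normalized (so a dot product equals cosine similarity), and the memory bank is a plain tensor rather than a FIFO queue.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder, momentum_encoder, m=0.99):
    # Eq. (2): exponential moving average update of the momentum encoder parameters.
    for p, p_hat in zip(encoder.parameters(), momentum_encoder.parameters()):
        p_hat.data.mul_(m).add_(p.data, alpha=1.0 - m)

def info_nce(query, pos_key, memory_bank, tau=0.07):
    """One direction of Eqs. (3)/(4): each query is contrasted against its paired positive
    and all entries of the memory bank. Inputs are assumed L2-normalized."""
    pos_logits = (query * pos_key).sum(dim=-1, keepdim=True) / tau   # (B, 1)
    neg_logits = query @ memory_bank.t() / tau                       # (B, N_m)
    logits = torch.cat([pos_logits, neg_logits], dim=1)
    targets = torch.zeros(query.size(0), dtype=torch.long)           # the positive sits at index 0
    return F.cross_entropy(logits, targets)

# Dummy usage: batch of 4 embeddings, memory bank of 9,600 text features.
B, D, Nm = 4, 256, 9600
f_v = F.normalize(torch.randn(B, D), dim=-1)       # video embeddings f^v_i
f_l_hat = F.normalize(torch.randn(B, D), dim=-1)   # momentum text embeddings \hat{f}^l_i
M_l = F.normalize(torch.randn(Nm, D), dim=-1)      # text memory bank M^l
loss_v2t = info_nce(f_v, f_l_hat, M_l)             # Eq. (3); Eq. (4) swaps the video/text roles
print(loss_v2t.item())

# EMA update between two toy encoders (stand-ins for the real vision/language encoders).
enc, enc_hat = nn.Linear(D, D), nn.Linear(D, D)
momentum_update(enc, enc_hat, m=0.99)
```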
Further, we perform language-guided denoising to retain only the top-$N_{salient}$ salient frames by filtering out the unmatched/redundant frames of each video. Since only video-level annotations are provided, we need to estimate the relevance scores $R$ automatically. As shown in Table 1, we introduce four strategies for estimating relevance scores. (1) SimDot prediction relies on the output of two separate encoders (i.e., the frame encoder $f^e$ and the language encoder $f^l$) to model the relevance score $R(j|i)$ by computing the dot product of the frame embedding $f^e_{i,j} = f^e(E_{i,j})$ and the text embedding $f^l_i = f^l(L_i)$. However, since video-text data is noisy, only utilizing single-modality encoders may result in incorrect salient frames. (2) Momentum prediction improves SimDot prediction by introducing the supervision of momentum encoders (i.e., the momentum frame encoder $\hat{f}^e$ and the momentum language encoder $\hat{f}^l$), where the momentum frame embedding is $\hat{f}^e_{i,j} = \hat{f}^e(E_{i,j})$ and the momentum text embedding is $\hat{f}^l_i = \hat{f}^l(L_i)$. (3) CrossMom prediction considers the frame-text alignment that is directly built on the interaction between one modality's encoder and the other modality's momentum encoder. (4) Collaborative prediction combines Momentum prediction and CrossMom prediction for better performance. Although the frame embeddings and text embeddings can be (roughly) aligned by applying the video-text contrastive learning of Sec. 3.2, it is vital to precisely establish frame-text alignment for proposing/selecting salient frames. To this end, we introduce the MFCL module below. 3.4 Momentum Frame-Level MSL-Contrastive Learning (MFCL) Module To dynamically filter out the unmatched/redundant frames, we propose to adopt frame-level contrastive learning to directly measure the relevance scores $R$ between video frames and the paired text. However, video data often contains misaligned frame-text pairs. Simply applying standard NCE-based contrastive learning would force the misaligned frame-text pairs to be pulled closer, which inevitably has a negative effect on learning high-quality frame-text representations. Inspired by MIL-NCE [35], we thus propose a Momentum Frame-Level Multiple Salient-instance Learning (MSL) Contrastive Learning (MFCL) module to assist in alleviating the noise problem. The core idea is to use the salient frames filtered by the SFP mechanism in each video to form a set of positive candidate pairs, instead of considering each positive pair independently. In this work, we suppose that MFCL and SFP are mutually interdependent, so that they can boost each other during training. Similar to MVCL, we additionally maintain a frame-level memory bank $M^e = \{\hat{q}^e_{j'}\}_{j'=1}^{N_m \cdot N}$ to store frame features, where $N_m$ is the memory bank size, $N$ is the number of sampled frames per video, and $\hat{q}^e_{j'}$ is a stored frame feature vector. Given each text description $L_i$ in mini-batch $B$, we select the salient frames filtered by the SFP mechanism in the paired video $V_i$ to form a set of positive candidate (frame-text) pairs $S_i$, and all frame samples in $M^e$ to form the negative ones. We then define the text-to-frame contrastive loss as: $\mathcal{L}_{T2E} = -\frac{1}{|B|} \sum_{(S_i, L_i) \in B} \log \frac{\sum_{\hat{f}^e_{ij} \in \hat{f}^s_i} \exp(\cos(f^l_i, \hat{f}^e_{ij})/\tau)}{\sum_{\hat{f}^e_{ij} \in \hat{f}^s_i} \exp(\cos(f^l_i, \hat{f}^e_{ij})/\tau) + \sum_{\hat{q}^e_{j'} \in M^e} \exp(\cos(f^l_i, \hat{q}^e_{j'})/\tau)}$, (6) where $S_i = \{E_{i,j}\}_{j=1}^{N}$ is the positive frame set of the video $V_i$, $N$ is the frame sequence length of the video, $f^l_i = f^l(L_i)$, and $\hat{f}^s_i = \{\hat{f}^e_{ij}\}_{j=1}^{N} = \{\hat{f}^e(E_{i,j})\}_{j=1}^{N}$.
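Before the symmetric frame-to-text direction in Eq. (7) below, the following PyTorch sketch illustrates the scoring-and-selection step of the SFP mechanism for a single video-text pair. It is only an illustration: the paper leaves the exact 'Collaborative' combination to Table 1, so averaging the Momentum and CrossMom scores here is an assumption, as are the function and variable names.

```python
import torch
import torch.nn.functional as F

def salient_frame_proposal(frame_emb, frame_emb_mom, text_emb, text_emb_mom, n_salient=2):
    """Score each frame of one video against the paired text and keep the top-N_salient.
    frame_emb / frame_emb_mom: (N, D) frame embeddings from the online / momentum encoder.
    text_emb / text_emb_mom:   (D,)  text embeddings from the online / momentum encoder."""
    frame_emb = F.normalize(frame_emb, dim=-1)
    frame_emb_mom = F.normalize(frame_emb_mom, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    text_emb_mom = F.normalize(text_emb_mom, dim=-1)

    momentum_score = frame_emb_mom @ text_emb_mom            # momentum encoders on both sides
    crossmom_score = 0.5 * (frame_emb @ text_emb_mom         # online frames vs. momentum text
                            + frame_emb_mom @ text_emb)      # momentum frames vs. online text
    relevance = 0.5 * (momentum_score + crossmom_score)      # assumed 'Collaborative' combination

    scores, idx = relevance.topk(k=min(n_salient, relevance.numel()))
    return idx, scores

# Dummy usage: 16 candidate frames, keep the 2 most language-relevant ones.
N, D = 16, 256
idx, scores = salient_frame_proposal(torch.randn(N, D), torch.randn(N, D),
                                     torch.randn(D), torch.randn(D), n_salient=2)
print(idx.tolist(), scores.tolist())
```

The returned indices define the salient-frame positive set $S_i$, over which the numerators of Eqs. (6) and (7) sum, rather than treating a single frame as the positive.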
Similarly, given each positive frame set $S_i$, we define the frame-to-text contrastive loss as: $\mathcal{L}_{E2T} = -\frac{1}{|B|} \sum_{(S_i, L_i) \in B} \log \frac{\sum_{f^e_{ij} \in f^s_i} \exp(\cos(f^e_{ij}, \hat{f}^l_i)/\tau)}{\sum_{f^e_{ij} \in f^s_i} \exp(\cos(f^e_{ij}, \hat{f}^l_i)/\tau) + \sum_{f^e_{ij} \in f^s_i} \sum_{\hat{q}^l_{j'} \in M^l} \exp(\cos(f^e_{ij}, \hat{q}^l_{j'})/\tau)}$, (7) where $\hat{f}^l_i = \hat{f}^l(L_i)$ and $f^s_i = \{f^e_{ij}\}_{j=1}^{N} = \{f^e(E_{i,j})\}_{j=1}^{N}$ (the text memory bank $M^l = \{\hat{q}^l_{j'}\}_{j'=1}^{N_m}$ is defined in Sec. 3.2). As a result, by combining the text-to-frame and frame-to-text contrastive losses, the objective function for MFCL is given by: $\mathcal{L}_{MFCL} = \mathcal{L}_{E2T} + \mathcal{L}_{T2E}$. (8) 3.5 Language-Guided Salient Frame Matching (LSFM) Module After obtaining the language-guided salient frames, we utilize a multi-modal cross-attention fusion Transformer (see Figure 2) to capture token-level semantic alignment between visual patches and words for better performance (see the design details of this Transformer in the supp. material). Further, we take the [CLS] token embedding output by the multi-modal fusion Transformer as the joint representation of a frame-text pair $(E_{i,j}, L_i)$, and deploy a fully-connected layer to predict the matching probability, which is similar to the sentence-pair classification task in BERT's pre-training phase. The matching loss is defined as: $\mathcal{L}_{LSFM} = -\mathbb{E}_{(E_{i,j}, L_i) \sim D_{salient}} \log P(y_{i,j} \mid E_{i,j}, L_i)$, (9) where $E_{i,j}$ denotes the $j$-th frame feature of video $V_i$, $L_i$ denotes the text feature, $D_{salient}$ is the set of salient frame-text pairs obtained by applying the SFP mechanism to the mini-batch, and $y_{i,j}$ is the ground-truth matching label (0 or 1) of the frame-text pair $(E_{i,j}, L_i)$. During inference, we use a mean pooling layer to aggregate all salient frame scores as the video-level prediction score. Finally, by combining all the proposed modules for video-language modeling at three levels, we train our LGDN model by minimizing the total objective function: $\mathcal{L}_{LGDN} = \mathcal{L}_{MVCL} + \mathcal{L}_{MFCL} + \mathcal{L}_{LSFM}$. (10) 4 Experiments 4.1 Datasets and Settings Pre-Training Datasets. Due to restricted computing resources, we follow COTS [32] to pre-train our LGDN on pure image-text datasets. Our pre-training data consists of Conceptual Captions [42], SBU [39], VG [23] and MSCOCO [28], which contains 5.2 million image-text pairs. We additionally apply CC12M [3] (about 2 million URLs are now invalid) for better performance, accumulating 15.2 million image-text pairs in total. Downstream Datasets. We evaluate our proposed LGDN on four public video-text retrieval datasets: MSR-VTT [50], MSVD [4], DiDeMo [16], and VATEX [46]. To further demonstrate the general applicability of our LGDN, we also carry out experiments on a public video-question answering dataset: MSRVTT-QA [49]. We present the details of these downstream datasets as well as the evaluation metrics for the downstream tasks in the supp. material. Implementation Details. Following previous work [25], we sample N = 16 frames per video: each video is equally split into 16 segments and one frame is randomly sampled from each segment. We empirically set the initial learning rate to 1e-5 and adopt AdamW [31] with a weight decay of 0.02 for 5 epochs. In the warm-up stage (the first epoch), the model is trained to optimize Eq. (10) without applying the SFP mechanism. We also set the other hyper-parameters uniformly as: number of salient frames $N_{salient} = 2$, mini-batch size $|B| = 24$, momentum hyper-parameter $m = 0.99$, temperature $\tau = 0.07$, and queue size $N_m = 9{,}600$. We adopt pre-trained BERT-Base as the language encoder and ViT-Base [9] as the vision encoder. More details are given in the supp. material.
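The two-stage sampling starts from the sparse sampling described in the implementation details above (split each video into 16 segments and draw one random frame per segment); a small, self-contained sketch of that first stage is given below. It only returns frame indices, leaves frame decoding aside, and the function name and seeding are illustrative choices rather than part of the paper.

```python
import random

def sample_frame_indices(num_frames_in_video, n_segments=16, seed=None):
    """Split the video into n_segments equal chunks and draw one random frame index
    from each chunk (the sparse sampling strategy described above)."""
    rng = random.Random(seed)
    indices = []
    for s in range(n_segments):
        start = (s * num_frames_in_video) // n_segments
        end = ((s + 1) * num_frames_in_video) // n_segments
        end = max(end, start + 1)          # guard against very short videos
        indices.append(rng.randrange(start, min(end, num_frames_in_video)))
    return indices

# A 300-frame video sampled into 16 candidate frames.
print(sample_frame_indices(300, n_segments=16, seed=0))
```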
Evaluation Metrics. We adopt two widely-used metrics in cross-modal retrieval: Recall at K (R@K, K = 1, 5, 10), and Median Rank (MdR) / Mean Rank (MnR). R@K is the percentage of queries whose correct match appears among the K nearest points, and MdR / MnR measures the median / mean rank of target items in the retrieved ranking list. We also report two additional metrics named 'R@Sum' and 'R@Mean' in our ablation study, which sum/average all recall metrics for overall evaluation. Following ClipBERT [25], we also report accuracy (Acc) for the video-question answering task. 4.2 Ablation Study In this subsection, we conduct a comprehensive ablation study to investigate the contributions of the different components of our full model. If not specifically indicated, we set N = 16 for global alignment and $N_{salient} = 2$ for token-level alignment as the default setting. Effect of the Values of $N_{salient}$ and N. A common perspective in video/video-language understanding is that more frames per video bring better performance. We thus conduct experiments on the number of frames used for token-level alignment in Figure 3(a-b). We sample N = 16 frames from each video and evaluate different variants that use $N_{salient} \in \{1, 2, 3, 4, 8, 16\}$ frames. Note that when $N_{salient} = 16$, sampling by our SFP degrades to w/o SFP. It can be observed that utilizing only $N_{salient} = \{2, 3, 4\}$ salient frames filtered by our SFP significantly outperforms utilizing all 16 extracted frames while enjoying faster speed (see the green lines). This suggests that our SFP mechanism not only selects correct salient frames but also alleviates the noise problem. To investigate the influence of the value of N on our LGDN, we evenly sample $N \in \{2, 3, 4, 8\}$ frames per video and freeze $N_{salient} = \{1, 2\}$ salient frames. The results in Figure 3(c) indicate that more extracted frames per video are beneficial to the token-level alignment in our LGDN model, as this provides a larger candidate set for selecting salient frames. Meanwhile, when N becomes larger (> 4), the performance tends to converge, further demonstrating the redundancy in the videos. Contribution of Each Component. We further demonstrate the contributions of the three objective functions as well as the salient frame proposal (SFP) mechanism used in our full LGDN model in Table 2. We start with the objective function $\mathcal{L}_{LSFM}$ (w/o SFP), which means only applying the matching loss for token-level alignment without using the SFP mechanism. It can be observed that: (1) $\mathcal{L}_{MFCL}$ (and $\mathcal{L}_{MVCL}$) combined with $\mathcal{L}_{LSFM}$ (w/o SFP) can bring improvements, suggesting that global alignment is beneficial to token-level alignment (during the training stage). (2) Simply applying the frame-level alignment may cause a negative effect, while combining it with our MSL design brings better results. This demonstrates that our design of $\mathcal{L}_{MFCL}$ does help alleviate the noise problem. (3) When the SFP mechanism is added (see $\mathcal{L}_{LSFM}$ (w/o SFP) + $\mathcal{L}_{MFCL}$ + $\mathcal{L}_{MVCL}$ vs. $\mathcal{L}_{LSFM}$ + $\mathcal{L}_{MFCL}$ + $\mathcal{L}_{MVCL}$), the performance is significantly improved, which clearly shows the effectiveness of our proposed SFP mechanism. (4) For the same trained full LGDN model, combining the global and token-level alignment during inference can bring further improvements. Note that our full LGDN still achieves the state of the art on MSR-VTT even without considering global alignment during inference.
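For reference, the retrieval metrics reported throughout these experiments (R@K and MdR/MnR) can be computed from a query-item similarity matrix as in the generic NumPy sketch below; this is not the authors' evaluation script and it assumes the ground-truth match of query i is item i.

```python
import numpy as np

def retrieval_metrics(sim):
    """sim: (num_queries, num_items) similarity matrix where item i is the ground-truth
    match for query i. Returns R@1/5/10 (in %) along with MdR and MnR."""
    order = np.argsort(-sim, axis=1)                        # most similar item first
    ranks = np.array([np.where(order[i] == i)[0][0] + 1     # 1-based rank of the target item
                      for i in range(sim.shape[0])])
    metrics = {f"R@{k}": 100.0 * float(np.mean(ranks <= k)) for k in (1, 5, 10)}
    metrics["MdR"] = float(np.median(ranks))
    metrics["MnR"] = float(np.mean(ranks))
    return metrics

# Dummy usage with a random 100 x 100 similarity matrix.
print(retrieval_metrics(np.random.rand(100, 100)))
```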
4.3 Comparison to the State of the Art We first report the text-video retrieval results on MSR-VTT with three data partitions in Table 3. It can be observed that: (1) Our LGDN outperforms all previous works by large margins. Particularly, compared with the most recent model Frozen in Time [2], our LGDN achieves an improvement of 7.9% (38.9% vs. 31.0%) for Text-to-Video R@1 on the MSR-VTT 1k-A test set. (2) Our LGDN also outperforms methods utilizing extra modalities (e.g., motion and audio) or those pre-trained on extremely large video data (e.g., HowTo100M). (3) When leveraging a much larger pre-training (image-text) dataset, our LGDN (marked with †) achieves significant improvements. To demonstrate the robustness of our model, we also evaluate it on VATEX, MSVD, and DiDeMo in Tables 4–6, respectively. Due to limited space, only text-to-video retrieval is considered here. For VATEX (Table 4), our LGDN significantly outperforms the state-of-the-art method Support Set, which is trained on an order of magnitude more data. Our LGDN still performs the best on MSVD (Table 5) and DiDeMo (Table 6). Particularly, in the DiDeMo dataset, each description is annotated with localization information; in other words, annotations may only be aligned with the localized moments, which causes the noise problem since many methods utilize all frames as input. Recent works exploit the temporal labels of captions to alleviate the noise problem and achieve higher performance. However, even without considering this, our LGDN still largely outperforms the most recent method Frozen [2], further demonstrating the effectiveness of our LGDN. To show the general applicability of our LGDN, we evaluate it on the VideoQA task in Table 7. Even without utilizing large-scale video datasets devoted to the VideoQA task, our LGDN outperforms all competitors, validating its effectiveness in VideoQA. In addition, to reveal the critical importance of solving the noise issue for video-language modeling, we directly apply the SFP mechanism to the latest model CLIP4Clip [34] in Table 8. We find that applying the SFP mechanism brings a boost to CLIP4Clip. The ensemble mechanism further improves the results, indicating that the proposed SFP mechanism is complementary to the baseline. 4.4 Additional Results Applying SFP to Different Frame Sampling Techniques. Note that our SFP mechanism must be combined with a frame sampling technique, since we adopt a two-stage sampling strategy in this paper. Thus, we apply our SFP mechanism to three frame sampling techniques: Sparse Sampling, Random Sampling, and Dense Uniform (equal-interval sampling). The results on the MSR-VTT 1k-A test set are provided in Table 9. It can be observed that our SFP significantly boosts the different sampling strategies, further demonstrating the general applicability of our SFP mechanism. Expansion of the Relevance Score Estimator. In Sec. 3.3, we proposed four relevance score estimators for the LSFM module. To find out which is the best, we present the ablation study results for the different relevance score estimators in Table 10. We can see a large gap between SFP and random sampling (w/o SFP), directly demonstrating the effectiveness of the proposed SFP mechanism. Meanwhile, both Momentum and CrossMom outperform SimDot, suggesting that introducing a momentum encoder is beneficial to relevance score estimation. Collaborative, which combines Momentum and CrossMom, generally leads to further improvements. Model Capacity. We also provide a detailed comparison with other methods in terms of model capacity and R@Sum (on the MSR-VTT 1k-A test set) in Table 11.
It can be clearly seen that: (i) When fusion layers are not used (i.e., only global alignment is adopted), our LGDN (global) outperforms the state-of-the-art method Frozen in Time [2], but with far fewer model parameters. (ii) Our full LGDN performs much better than all the competitors, but its parameter count (215M) is still comparable to that of Frozen in Time (180M) and even significantly smaller than those of the other competitors. These observations suggest that the performance gains obtained by our LGDN are not due to utilizing more model parameters. 4.5 Visualization Results We provide visualizations of our LGDN in Figure 4. We uniformly sample 5 frames from each video and provide the relevance scores of the 5 frames on the left; the red ones denote salient frames selected by the SFP mechanism. It can be seen that: (1) Although the holistic video is semantically related to the paired text, there still exist noisy frames (e.g., the transition in Frame 1 and Frame 3 of Query7500) and unrelated frames (e.g., in Frame 4 and Frame 5 of Query7544, a man is rolling while the paired text is 'a car goes racing down the road'). (2) The relevance scores obtained from the SFP mechanism correctly measure the consistency between each frame and the paired text, which indeed helps our LGDN to precisely filter out noisy information for better video-language modeling. 5 Conclusion In this work, we propose a novel Language-Guided Denoising Network (LGDN) for video-language modeling, which can dynamically filter out the unmatched or redundant frames under language supervision and thus maintain only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state of the art by large margins. In the future, we will consider aggregating temporal information over salient frames and applying our approach to more challenging video-language tasks (e.g., video grounding). Acknowledgments and Disclosure of Funding This work was supported in part by the National Natural Science Foundation of China (61976220 and 61832017), the Beijing Outstanding Young Scientist Program (BJJWZYJH012019100020098), and the Research Seed Funds of the School of Interdisciplinary Studies, Renmin University of China.
1. What is the main contribution of the paper regarding video-language modeling?
2. What are the strengths of the proposed approach, particularly in addressing the noise issue in raw videos?
3. What are the weaknesses of the paper, especially regarding its limitations and potential negative societal impacts?
4. How does the reviewer assess the novelty and effectiveness of the proposed SFP mechanism and three-level cross-modal interactions?
5. What are the questions raised by the reviewer regarding the paper's content or experimental setup?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This paper tackles one important issue in video-language modeling: previous works assume that video frames and the text descriptions for the video are semantically correlated. However, this assumption is not valid for real-world <video, text> pairs, since, first, the video-level text descriptions may not cover all information in the video, and second, a raw video often contains noisy or meaningless information that does not appear in the text descriptions. The previous correlation assumption did not take this noise into account, and the self-attention mechanism still leaves misaligned frames which harm cross-modal alignment. This paper proposed an end-to-end (E2E) language-guided denoising network, LGDN, for video-language modeling. LGDN includes (1) a salient frame proposal (SFP) mechanism to dynamically filter out irrelevant or redundant frames and sparsely sample salient frames to improve video-language modeling, and (2) cross-modal interactions at three levels, i.e., salient frame matching at the token level guided by language, momentum frame-level MSL-contrastive learning (MFCL), and momentum video-level contrastive learning (MVCL). Experimental results showed that LGDN outperforms the current SOTA. Ablation studies further demonstrated the importance of tackling the noise issue.

Strengths And Weaknesses
Strengths:
This paper addresses an important limitation in previous works on video-language modeling, that is, the impractical assumption that video frames and the text descriptions for the video are semantically correlated. Noise in raw videos has not been sufficiently addressed, and the existing use of the self-attention mechanism cannot fully solve this problem; the irrelevant or redundant information still harms cross-modal alignment. Hence, this work is valuable to the research community.

The SFP mechanism and the three-level cross-modal interactions are sound, well motivated, and have good novelty. SFP is achieved through exploring MVCL and MFCL. After obtaining the language-guided salient frames, LGDN also has a language-guided salient frame matching (LSFM) module to conduct token-level semantic alignment between visual patches and words for improved performance. The final loss of LGDN is the combined loss of MVCL, MFCL, and LSFM.

The Related Work section is clearly written. It summarizes significant works in the past and their limitations, and clearly explains the choice of momentum frame-level MSL-contrastive learning to help address the misaligned frames. The paper also conducts an interesting investigation of the four strategies for estimating relevance scores.

The evaluations are comprehensive, covering four public text-video retrieval datasets and one VQA dataset. The evaluation settings support a fair comparison to previous models. Performance gains from LGDN over existing approaches, including the SOTA, are quite strong. Experimental results showed that LGDN outperforms previous methods, including the current SOTA, by a large margin on text<->video retrieval on MSR-VTT. LGDN also outperforms methods exploiting extra modalities or those pre-trained on very large video data. LGDN also shows promising model capacity, as its performance is significantly improved when trained on much larger pre-training data. The extensive evaluations on other text-video retrieval datasets and VQA also demonstrated that LGDN outperforms existing approaches including the current SOTA.

Ablation studies verified the contributions from the proposed SFP, MVCL, MFCL, and LSFM.

Last but not least, the visualization results are interesting and helpful for directly illustrating the importance of addressing the noisy-frame issue and showing that the SFP mechanism helps LGDN to improve video-language modeling.

Although the code is not released, the main body and Appendix provide enough details to help reproducibility. The Appendix also includes more detailed experimental results analyzing the effect of the SFP mechanism, different relevance score estimators, and memory bank sizes, as well as useful additional visualization results.

Weaknesses:
The paper is missing the important Limitations section (and potential negative societal impacts). It is highly desirable that the authors add a discussion of the limitations and potential negative societal impacts of this work.

Questions
The paper is clearly written. However, the authors mentioned that the pre-training is set up as described in the paper due to restricted computation resources. It would be useful to explain what the experimental setup would be if more computation resources were available, that is, how to scale up the pre-training setup.

Limitations
The paper is missing the important Limitations section (and potential negative societal impacts). It is highly desirable that the authors add a discussion of the limitations and potential negative societal impacts of this work.
NIPS
Title LGDN: Language-Guided Denoising Network for Video-Language Modeling Abstract Video-language modeling has attracted much attention with the rapid growth of web videos. Most existing methods assume that the video frames and text description are semantically correlated, and focus on video-language modeling at video level. However, this hypothesis often fails for two reasons: (1) With the rich semantics of video contents, it is difficult to cover all frames with a single videolevel description; (2) A raw video typically has noisy/meaningless information (e.g., scenery shot, transition or teaser). Although a number of recent works deploy attention mechanism to alleviate this problem, the irrelevant/noisy information still makes it very difficult to address. To overcome such challenge, we thus propose an efficient and effective model, termed Language-Guided Denoising Network (LGDN), for video-language modeling. Different from most existing methods that utilize all extracted video frames, LGDN dynamically filters out the misaligned or redundant frames under the language supervision and obtains only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state-of-the-arts by large margins. We also provide detailed ablation study to reveal the critical importance of solving the noise issue, in hope of inspiring future video-language work. 1 Introduction Humans are exposed to the world through a variety of sensory organs, such as eyes, ears, and the sense of touch. In the past few years, multi-modal data (e.g., text or video) has grown and accumulated rapidly on the Internet, which brings the increasing demands for video-language understanding. As one of the fundamental topics, video-language modeling is still challenging due to the heterogeneity of the video-text data. More notably, the video-text data is typically noisy (e.g., misaligned or semi-relevant, as shown in Figure 1), leading to intractable video-language modeling. The dominant paradigm [8, 30, 13, 14, 45] for video-language modeling is to first extract language features and dense video features via off-the-shelf language and vision models (e.g., BERT [7], 3D CNN [48]), and then model the cross-modal representation by defining the objective function (e.g., triplet loss [17]) within a joint semantic space. Although achieving great success, these methods typically densely sample frames from a full sequence of raw video to obtain richer representation and thus cost excessive computation. Since the heavy computation makes it challenging to train the whole network end-to-end, they often achieve sub-optimal performance in video-language modeling. Recently, ClipBERT [25] proposes a sparse sampling strategy to tackle this drawback. Concretely, ∗The corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). ClipBERT first samples video frames sparsely (8–16 frames per video), and then models the crossmodal alignment at frame-level. This sparse sampling paradigm enables end-to-end training, leading to much better performance. Nevertheless, token-level cross-modal interaction, which has achieved great success in image-text modeling [21, 26], is still not well explored for video-language modeling due to the heavy resource computation (even with 8–16 frames per video). Moreover, both the dominant paradigm and ClipBERT’s sparse sampling paradigm assume that video frames and the text description (w.r.t. 
a video-text pair) are semantically correlated, which is often invalid in practice. The correlation hypothesis often fails for two reasons: (1) With the rich semantics of video contents, it is hard to cover all frames with a single video-level description; (2) A raw video often has noisy or meaningless information (e.g., scenery shot, transition or teaser). For the dominant paradigm which utilizes densely-sampled frames, though often with self-attention mechanism [43], the irrelevant/noisy information makes it hard to learn high-quality video-language representation. For the sparse sampling paradigm used in ClipBERT that models the cross-modal alignment at frame-level, the misaligned frame-text pairs are wrongly forced to become closer, which inevitably leads to inaccurate crossmodal alignment. Overall, due to this noise issue (see Figure 1), video-language modeling is still challenging. Note that humans also encounter such problem in reality, but seem to be born with the ability to resist noise. That is, everyone can quickly scan through the entire video, easily ignore the noisy frames and focus on the salient ones given the text. Motivated by this human ability, we propose a Language-Guided Denoising Network termed LGDN to dynamically filter out irrelevant or redundant information under the language supervision for better video-language modeling. Concretely, we devise a Salient Frame Proposal (SFP) mechanism which adopts four strategies to estimate frame-level relevance scores under the language supervision and proposes/selects only salient frames (per video) for precisely video-language modeling. Although the frame embeddings and text embeddings can be (roughly) aligned by introducing a Momentum Video-Level Contrastive Learning (MVCL) module, it is vital to precisely establish frame-text alignment for proposing salient frames. Therefore, based on multiple instance learning (MIL), we propose a Momentum Frame-Level Multiple Salient-instance learning (MSL) -Contrastive Learning (MFCL) module for video-language modeling at frame-level. Finally, with our SFP mechanism, we propose a Language-Guided Salient Frame Matching (LSFM) module for fine-grained alignment, which adopts a token-aware cross-attention Transformer for cross-modal token-level alignment. Our main contributions are as follows: (1) We devise a salient frame proposal mechanism that can dynamically filter out irrelevant information under the language supervision, meanwhile maintaining salient information. (2) We propose an end-to-end framework termed LGDN for video-language modeling with cross-modal interaction at three levels: language-guided salient frame matching at token-level, momentum frame-level MSL-contrastive learning, and momentum video-level contrastive learning. (3) We evaluate our LGDN on five public datasets and find that our LGDN outperforms the latest competitors by large margins. We also provide detailed ablation study to reveal the critical importance of solving the noise issue, in hope of inspiring future video-language work. 2 Related Work Video-Language Modeling. Video-language modeling, a fundamental research topic that is beneficial for search engine and video recommendation, has attracted a lot of attention in recent years with the rapid growth of web videos. Previous works have made great efforts to model richer rep- resentations for video and text modalities and then align the features of the two modalities by the objective function (e.g., triplet loss). 
One common representative approach [5, 20] is to adopt a Graph Convolution Network (GCN) to extract richer information for video-text retrieval. Another representative approach [30, 13, 14, 52, 29] is to exploit extra experts (e.g., object, motion, speech) for video-language modeling. Recently, ClipBERT [25] proposes a sparse sampling strategy that enables end-to-end training, thus achieving higher performance. Moreover, Frozen in Time [2] also follows a sparse sampling paradigm, and proposes an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. However, as illustrated in Figure 1, a raw video typically has noisy/meaningless information, and thus the presence of misaligned frames is inevitable during video-language modeling. Note that most existing methods assume that the video frames and paired text are semantically correlated, without considering the noise phenomenon. Although a self-attention mechanism has been widely applied, the misaligned frames still harm the cross-modal alignment. In this work, we thus propose a salient frame proposal mechanism to effectively (and directly) address this problem. Cross-Modal Alignment Objective Functions Most previous methods adopt triplet loss as a major objective function for video-language modeling. CGMSCD [14] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning. More recent works [41, 12, 19] propose to apply the InfoNCE contrastive loss [47, 38, 6] to enhance representation learning. Particularly, BriVL [18], ALBEF [26] and COTS [32] introduce a momentum mechanism [15] to maintain more negative samples for image-text contrastive learning. Following these state-of-the-art models, we propose momentum video-level contrastive learning for video-text global alignment in this paper. Note that MIL-NCE [35] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [36]. In this work, we thus propose momentum frame-level MSLcontrastive learning to assist in addressing the misaligned frame problem. 3 Methodology Figure 2 gives a brief overview of our LGDN framework for video-language modeling, which is composed of four main components: 1) language and vision representation extractors; 2) momentum video-level contrastive learning; 3) momentum frame-level MSL-contrastive learning, and 4) language-guided salient frame matching. In the following, we will describe each component in detail. 3.1 Feature Representation Vision Representation. Given an input video V as a sequence of frames {Ei}Ni=1, where N is the length of the video, we utilize a 2-D vision Transformer (e.g., ViT) as our vision backbone to extract frame-level features E = {E1,E2, ...,EN}. Each frame Ei of video V can be represented as Ei = [ecls; e1; ...; ekv−1] ∈ Rkv×Dv , where ecls denotes the [CLS] token, kv denotes the patch sequence length, and Dv denotes the dimension of the patch embeddings. We utilize a fully-connected layer to project the [CLS] token into the frame embedding fei . We then deploy a temporal module T (e.g., a Transformer layer) to aggregate the frame embeddings to obtain the final video embedding: fv = T ([fe1 , f e 2 , ..., f e N ]) = f v(V ), (1) where fv denotes the entire vision (video) encoder. Language Representation. 
Given an input text L, we utilize BERT-Base as our language backbone to extract text feature L, which can be represented as L = [lcls; l1; ...; lkl−1] ∈ Rkl×Dl , where lcls is the [CLS] token, kl is the token sequence length, and Dl is the dimension of the token embeddings. We deploy a fully-connected layer to project the [CLS] token into the text embedding f l = f l(L), where f l is the language encoder. 3.2 Momentum Video-Level Contrastive Learning (MVCL) Module Note that our LGDN is designed to filter out the unmatched/redundant frames for better token-level alignment, without leveraging the temporal information of the videos explicitly. Therefore, we firstly introduce a Momentum Video-Level Contrastive Learning (MVCL) module to address this problem. The MVCL module utilizes a temporal module (e.g., Transformer block) to aggregate the frame embeddings to obtain the video embedding. Contrastive learning is then applied for holistic video-text alignment. However, video data takes up large GPU memory and the mini-batch size tends to be small with strict resource, which brings harm to contrastive learning. Inspired by MoCo [15], we introduce the momentum mechanism to maintain massive negative samples in memory bank for contrastive learning. Concretely, We firstly maintain video memory bank Mv = {q̂vj} Nm j=1 and text memory bank Ml = {q̂lj} Nm j=1 to store video/text features, where Nm denotes the memory bank size and q̂vj / q̂ l j denotes the j-th stored video/text feature vector. Let f v (with parameters θv) and f̂v (with parameters θ̂v) denote vision encoder and vision momentum encoder, respectively. Similarly, let f l (with parameters θl) and f̂ l (with parameters θ̂l) denote language encoder and language momentum encoder, respectively. The parameters of momentum encoders are updated by: θ̂v = m · θ̂v + (1−m) · θv, θ̂l = m · θ̂l + (1−m) · θl, (2) where m is the momentum coefficient hyper-parameter. The loss function is thus constructed as follow: for each video Vi in mini-batch B, we define the video-to-text contrastive loss between its paired text Li and all negative samples in the text memory bank Ml, resulting in an InfoNCE loss (with τ being the temperature hyper-parameter): LV2T =− 1 |B| ∑ (Vi,Li)∈B log exp(cos(fvi , f̂ l i )/τ) exp(cos(fvi , f̂ l i )/τ)+ ∑ q̂lj∈Ml exp(cos(fvi , q̂ l j)/τ) , (3) where f̂ li = f̂ v(li), and the similarity of two features is measured by the cosine similarity. Similarly, given each text description Li in mini-batch B, we define the text-to-video contrastive loss as: LT2V =− 1 |B| ∑ (Vi,Li)∈B log exp(cos(f li , f̂ v i )/τ) exp(cos(f li , f̂ v i )/τ)+ ∑ q̂vj∈Mv exp(cos(f li , q̂ v j )/τ) , (4) where f̂vi = f̂ v(Vi). Finally, the objective function for MVCL is defined as follows: LMVCL = LV2T + LT2V. (5) 3.3 Salient Frame Proposal (SFP) Mechanism As shown in Figure 1, video-text data inevitably contains misaligned frame-text pairs. Although an attention mechanism has been applied in Eq. (1), the irrelevant and noisy information would still mislead the cross-modal alignment in our model. To alleviate this problem, we thus propose a Salient Frame Proposal (SFP) mechanism for video-language modeling. The core idea of our SFP mechanism is to dynamically filter out misaligned or redundant frames and maintain only a few important frames to represent the video well, which are called as salient frames. Formally, for each video-text pair, we first identify the relevance score R(j|i) between the text Li and the j-th frame Ei,j of the video Vi. 
Further, we perform language-guided denoising to retain only top-Nsalient salient frames by filtering out the unmatched/redundant frames from each video. Since only video-level annotations are provided, we need to estimate the relevance scores R automatically. As shown in Table 1, we introduce four strategies for estimating relevance scores. (1) SimDot prediction relies on the output of two separate encoders (i.e., frame encoder fe and language encoder f l) to model the relevance score R(j|i) by computing the dot product of the frame embedding fei,j = f e(Ei,j) and the text embedding f li = f l(Li). However, since video-text data is noisy, only utilizing single-modality encoders may result in incorrect salient frames. (2) Momentum prediction improves SimDot prediction by introducing the supervision of momentum encoders (i.e., momentum frame encoder f̂e and momentum language encoder f̂ l), where the momentum frame embedding f̂ei,j = f̂ e(Ei,j) and the momentum text embedding f̂ li = f̂ l(Li). (3) CrossMom prediction considers the frame-text alignment that is directly built on the interaction between one modality encoder and another modality’s momentum encoder. (4) Collaborative prediction combines Momentum prediction and CrossMom prediction for better performance. Although the frame embeddings and text embeddings can be (roughly) aligned through applying video-text contrastive learning in Sec. 3.2, it is vital to precisely establish frame-text alignment for proposing/selecting salient frames. To this end, we introduce the MFCL module below. 3.4 Momentum Frame-Level MSL-Contrastive Learning (MFCL) Module To dynamically filter out the unmatched/redundant frames, we propose to adopt frame-level contrastive learning to directly measure the relevance scores R between video frames and paired text. However, video data often contains misaligned frame-text pairs. Simply applying standard NCE-based contrastive learning would force the misaligned frame-text pairs to be pulled closer, which inevitably has negative effect on learning high-quality frame-text representation. Inspired by MIL-NCE [35], we thus propose a Momentum Frame-Level Multiple Salient-instance Learning (MSL) Contrastive Learning (MFCL) module to assist in alleviating the noise problem. The core idea is to use the salient frames filtered by the SFP Mechanism in each video to form a set of positive candidate pairs, instead of considering each positive pair independently. In this work, we suppose that MFCL and SFP have mutual interdependence so that they can bring boost to each other during training. Similar to MVCL, we additionally maintain a frame-level memory bank Me = {q̂ej′} Nm∗N j′=1 to store frame features, where Nm is the memory bank size, N is the number of sampled frames per video, and q̂ej′ is a stored frame feature vector. Given each text description Li in mini-batch B, we select salient frames filtered by the SFP Mechanism in the paired video Vi to form a set of positive candidate (frame-text) pairs Si and all frame samples in Me to form the negative ones. We then define the text-to-frame contrastive loss as: LT2E =− 1 |B| ∑ (Si,Li)∈B log ∑ f̂eij∈f̂si exp(cos(f li , f̂ e ij)/τ)∑ f̂eij∈f̂si exp(cos(f li , f̂ e ij)/τ)+ ∑ q̂e j′∈M e exp(cos(f li , q̂ e j′)/τ) , (6) where Si = {Ei,j}Nj=1 is the positive frame set of the video Vi, N is the frame sequence length of the video, f li = f l(Li), and f̂si = {f̂eij}Nj=1 = {f̂e(Ei,j)}Nj=1. 
Similarly, given each positive frame set $S_i$, we define the frame-to-text contrastive loss as:
$$\mathcal{L}_{E2T} = -\frac{1}{|B|} \sum_{(S_i, L_i) \in B} \log \frac{\sum_{f^e_{ij} \in f^s_i} \exp(\cos(f^e_{ij}, \hat{f}^l_i)/\tau)}{\sum_{f^e_{ij} \in f^s_i} \exp(\cos(f^e_{ij}, \hat{f}^l_i)/\tau) + \sum_{f^e_{ij} \in f^s_i} \sum_{\hat{q}^l_{j'} \in M^l} \exp(\cos(f^e_{ij}, \hat{q}^l_{j'})/\tau)}, \quad (7)$$
where $\hat{f}^l_i = \hat{f}^l(L_i)$ and $f^s_i = \{f^e_{ij}\}_{j=1}^{N} = \{f^e(E_{i,j})\}_{j=1}^{N}$ (the text memory bank $M^l = \{\hat{q}^l_{j'}\}_{j'=1}^{N_m}$ is defined in Sec. 3.2). As a result, by combining the text-to-frame and frame-to-text contrastive losses, the objective function for MFCL is given by:
$$\mathcal{L}_{MFCL} = \mathcal{L}_{E2T} + \mathcal{L}_{T2E}. \quad (8)$$

3.5 Language-Guided Salient Frame Matching (LSFM) Module

After obtaining the language-guided salient frames, we utilize a multi-modal cross-attention fusion Transformer (see Figure 2) to capture token-level semantic alignment between visual patches and words for better performance (see the design details of this Transformer in the supp. material). Further, we take the [CLS] token embedding output by the multi-modal fusion Transformer as the joint representation of a frame-text pair $(E_{i,j}, L_i)$, and deploy a fully-connected layer to predict the matching probability, which is similar to the sentence-pair classification task in BERT's pre-training phase. The matching loss is defined as:
$$\mathcal{L}_{LSFM} = -\mathbb{E}_{(E_{i,j}, L_i) \sim D_{salient}} \log P(y_{i,j} \mid E_{i,j}, L_i), \quad (9)$$
where $E_{i,j}$ denotes the $j$-th frame feature of video $V_i$, $L_i$ denotes the text feature, $D_{salient}$ is the set of salient frame-text pairs obtained by applying the SFP mechanism to the mini-batch, and $y_{i,j}$ is the ground-truth matching label (0 or 1) of the frame-text pair $(E_{i,j}, L_i)$. During inference, we use a mean pooling layer to aggregate all salient frame scores into the video-level prediction score. Finally, by combining all the proposed modules for video-language modeling at three levels, we train our LGDN model by minimizing the total objective function:
$$\mathcal{L}_{LGDN} = \mathcal{L}_{MVCL} + \mathcal{L}_{MFCL} + \mathcal{L}_{LSFM}. \quad (10)$$

4 Experiments

4.1 Datasets and Settings

Pre-Training Datasets. Due to restricted computing resources, we follow COTS [32] and pre-train our LGDN on pure image-text datasets. Our pre-training set consists of Conceptual Captions [42], SBU [39], VG [23], and MSCOCO [28], which together contain 5.2 million image-text pairs. We additionally apply CC12M [3] (about 2 million URLs are now invalid) for better performance, accumulating 15.2 million image-text pairs in total.

Downstream Datasets. We evaluate our proposed LGDN on four public video-text retrieval datasets: MSR-VTT [50], MSVD [4], DiDeMo [16], and VATEX [46]. To further demonstrate the general applicability of our LGDN, we also carry out experiments on a public video-question answering dataset: MSRVTT-QA [49]. We present the details of these downstream datasets as well as the evaluation metrics for the downstream tasks in the supp. material.

Implementation Details. Following previous work [25], we sample N = 16 frames per video: each video is equally split into 16 segments and one frame is randomly sampled from each segment. We empirically set the initial learning rate to 1e-5 and adopt AdamW [31] with a weight decay of 0.02 for 5 epochs. In the warm-up stage (first epoch), the model is trained to optimize Eq. (10) without applying the SFP mechanism. We also set the other hyper-parameters uniformly as: number of salient frames $N_{salient} = 2$, mini-batch size $|B| = 24$, momentum hyper-parameter $m = 0.99$, temperature $\tau = 0.07$, and queue size $N_m = 9{,}600$. We adopt pre-trained BERT-Base as the language encoder and ViT-Base [9] as the vision encoder. More details are given in the supp. material.

Evaluation Metrics.
We adopt two widely-used metrics in cross-modal retrieval: Recall at K (R@K, K= 1, 5, 10), and Median Rank (MdR) / Mean Rank (MnR). R@K means the percentage of correct matching in the K nearest points, and MdR / MnR measures the median / mean rank of target items in the retrieved ranking list. We also report two additional metrics named ‘R@Sum’ and ‘R@Mean’ in our ablation study, which sums/averages all recall metrics for overall evaluation. Following ClipBERT [25], we also report accuracy (Acc) in video-question answering task. 4.2 Ablation Study In this subsection, we conduct comprehensive ablation study to investigate the contributions of different components of our full model. If not specifically indicated, we set N = 16 for global alignment and Nsalient = 2 for token-level alignment as the default setting. Effect of Value Change of Nsalient and N . A common perspective for video/video-language understanding is that more frames per video bring better performance. We thus conduct experiments on the frame number used for token-level alignment in Figure 3(a-b). We sample N = 16 frames from each video and evaluate different variants that use Nsalient ∈ {1, 2, 3, 4, 8, 16} frames. Note that when Nsalient = 16, sampling by our SFP degrades to w/o SFP. It can be observed that utilizing only Nsalient = {2, 3, 4} salient frames filtered by our SFP significantly outperforms utilizing all 16 extracted frames meanwhile enjoying the faster speed (see the green lines). This suggests that our SFP mechanism not only selects correct salient frames but also alleviates the noise problem. To investigate the influence of value change of N on our LGDN, we evenly sample N ∈ {2, 3, 4, 8} frames per video and freeze Nsalient = {1, 2} salient frames. The results in Figure 3(c) indicate that more extracted frames per video are beneficial to the token-level alignment in our LGDN model, as it provides larger candidate set for selecting salient frames. Meanwhile, when N becomes larger (> 4), the performance tends to converge, further demonstrating the redundancy in the videos. Contributions of Each Components. We further demonstrate the contributions of the three objective functions as well as the salient frame proposal (SFP) mechanism used in our full LGDN model in Table 2. We start with the objective function LLSFM (w/o SFP), which means only applying matching loss in token-level alignment without using the SFP mechanism. It can be observed that: (1) LMFCL (and LMVCL) combined with LLSFM (w/o SFP) can bring improvements, suggesting that global alignment is beneficial to token-level alignment (during the training stage). (2) Simply applying the frame-level alignment may cause negative effect while combing with our MSL design brings better results. This demonstrates that our design of LMFCL does help alleviate the noise problem. (3) When the SFP mechanism is added (see LLSFM (w/o SFP) +LMFCL +LMVCL vs. LLSFM +LMFCL +LMVCL), the performance is significantly improved, which clearly shows the effectiveness of our proposed SFP mechanism. (4) For the same trained full LGDN model, combining the global and token-level alignment during inference can bring further improvements. Note that our full LGDN still achieves the state-of-the-art on MSR-VTT even without considering global alignment during inference. 4.3 Comparison to the State-of-the-Arts We first report the text-video retrieval results on MSR-VTT with three data partitions in Table 3. 
It can be observed that: (1) our LGDN outperforms all previous works by large margins. Particularly, as compared with the most recent model Frozen in Time [2], our LGDN achieves an improvement of 7.9% (38.9% vs. 31.0%) for Text-to-Video R@1 on the MSR-VTT 1k-A test set. (2) Our LGDN also outperforms methods utilizing extra modalities (e.g., motion and audio) or those pre-trained on extremely-large video data (e.g., HowTo100M). (3) When leveraging a much larger pre-training (image-text) dataset, our LGDN (marked with †) achieves significant improvements. To demonstrate the robustness of our model, we also evaluate it on VATEX, MSVD, and Didemo in Tables 4–6, respectively. Due to limited space, only text-to-video retrieval is considered here. For VATEX (Table 4), our LGDN significantly outperforms the state-of-the-art method Support Set which is trained on an order of magnitude more data. Our LGDN still performs the best on MSVD (Table 5) and Didemo (Table 6). Particularly, in the Didemo dataset, each description is annotated with localization information, in other words, annotations may only be aligned with the localized moments, thus causing the noise problem as many methods utilize all frames as the input. Recent works exploit temporal labels of captions to alleviate the noise problem and achieve higher performance. However, even without considering this, our LGDN still largely outperforms the most recent method Frozen [2], further demonstrating the effectiveness of our LGDN. To show the general applicability of our LGDN, we evaluate our LGDN on the VideoQA task in Table 7. Even without utilizing large-scale video datasets devoted to the VideoQA task, our LGDN outperforms all competitors, validating the effectiveness of our LGDN in VideoQA. In addition, to reveal the critical importance of solving the noise issue for video-language modeling, we directly apply the SFP mechanism to the latest model CLIP4Clip [34] in Table 8. We find that applying the SFP mechanism brings boost to Clip4CLIP. The ensemble mechanism further improves the results, indicating that the proposed SFP mechanism is complementary to the baseline. 4.4 Additional Results Applying SFP to Different Frame Sampling Techniques. Note that our SFP mechanism must be combined with a frame sampling technique since we adopt a two-stage sampling strategy in this paper. Thus, we apply our SFP mechanism to three frame sampling techniques: Sparse Sampling, Random Sampling, and Dense Uniform (equally interval sampling). The obtained results on the MSR-VTT 1kA test set are provided in Table 9. It can be observed that our SFP significantly boosts different sampling strategies, further demonstrating the general applicability of our SFP mechanism. Expansion of Relevance Score Estimator. In Sec. 3.3, we have proposed four relevance score estimators for the LSFM module. To find out which is the best, we present the ablation study results for different relevance score estimators in Table 10. We can see a large gap between SFP and random sampling (w/o SFP), directly demonstrating the effectiveness of the proposed SFP mechanism. Meanwhile, both Momentum and CrossMom outperform SimDot, suggesting that introducing momentum encoder is beneficial to relevance score estimation. Collaborative that combines Momentum and CrossMom generally leads to further improvements. Model Capacity. We also provide the detailed comparison to other methods in terms of model capacity and R@SUM (on the MSR-VTT 1kA test set) in Table 11. 
It can be clearly seen that: (i) When fusion layers are not used (i.e., only global alignment is adopted), our LGDN (global) outperforms the state-of-the-art method Frozen in Time [2], but with far fewer model parameters. (ii) Our full LGDN performs much better than all the competitors, while its parameter count (215M) is still comparable to that of Frozen in Time (180M) and significantly smaller than those of the other competitors. These observations suggest that the performance gains obtained by our LGDN are not due to utilizing more model parameters.

4.5 Visualization Results

We provide visualizations of our LGDN in Figure 4. We uniformly sample 5 frames from each video and show the relevance scores of the 5 frames on the left; the red ones denote salient frames selected by the SFP mechanism. It can be seen that: (1) Although the holistic video is semantically related to the paired text, there still exist noisy frames (e.g., the transition in Frame 1 and Frame 3 of Query7500) and unrelated frames (e.g., in Frame 4 and Frame 5 of Query7544, a man is rolling while the paired text is ‘a car goes racing down the road’). (2) The relevance scores obtained from the SFP mechanism correctly measure the consistency between each frame and the paired text, which indeed helps our LGDN to precisely filter out noisy information for better video-language modeling.

5 Conclusion

In this work, we propose a novel Language-Guided Denoising Network (LGDN) for video-language modeling, which can dynamically filter out the unmatched or redundant frames under language supervision and thus maintain only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state-of-the-art methods by large margins. In the future, we will consider aggregating temporal information over the salient frames and applying our approach to more challenging video-language tasks (e.g., video grounding).

Acknowledgments and Disclosure of Funding

This work was supported in part by the National Natural Science Foundation of China (61976220 and 61832017), the Beijing Outstanding Young Scientist Program (BJJWZYJH012019100020098), and the Research Seed Funds of the School of Interdisciplinary Studies, Renmin University of China.
1. What is the novel approach introduced by the paper in selecting salient frames for multi-modal models?
2. How effective is the proposed method in improving fine-tuned accuracy and computation efficiency?
3. Are there any concerns regarding the model capacity and memory efficiency of the proposed approach compared to other methods?
4. How does the SFP address the misalignment issue, and what are the diversity loss measures taken to prevent similar frames from being selected?
5. Can the authors provide more explanations or clarifications regarding the global and local alignments used in the inference phase?
6. Will the salience sampling used in the proposed approach limit its application to other video tasks that rely heavily on temporal information?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper introduces LGDN, which utilizes a self-taught mechanism to select salient frames to train the fusion layers. The vision and language encoder still use all frames and texts to train, and their goal is to learn representations while computing relevance scores for the frame selection. The whole mechanism by contrastive learning helps to filter out noisy information in the last few layers, boosting performance while reducing computation costs. They also demonstrate better empirical results over the baselines. Strengths And Weaknesses Pros: The paper proposes an effective method to select frames from noisy video, leading to an improvement in both fine-tuned accuracy and computation efficiency. To address the noisy problem in video is novel to me, it might become a new approach to constructing multi-modal models. There is no perfect and exact matching between any kinds of pair data so that the impact can be extended to other domains. Cons: The empirical results are not fully convincing to me. I think it would be better if the authors can report the model capacity of each method (see Questions 3). The central claim of this paper is that noisy video might harm the training, but the model still highly relies on learning on noisy data as the vision and language encoder still use full video to learn how to match the relevance between language and frames. It seems to me that the redundancy issue has not been addressed by LGDN (see Question 2), and the sampling might introduce another problem (see Question 3). Questions ClipBERT samples frames before feeding them into the model while your approach does the sampling only before fusion layers, so I wonder about the memory efficiency and speed of LGDN compared to ClipBERT, Frozen in Time, and other methods. I understand that the SFP might address the misalignment issue, but how it can help to ease the redundancy in the video. If you compute the relevance score between text and frames, it's likely that similar frames have similar scores, and I am not sure what can prevent LGDN to select the other frames that are similar to the salient frame. I feel like some diversity loss might be necessary to address this problem. What's the model size of other baselines? LGDN is composed of vision, language, and fusion Transformers while Frozen in Time doesn't have a fusion Transformer, making me concerned if the performance gain somewhat comes from the bigger models. For better comparison, you can try to use lesser layers in vision and language encoders. What are global and local alignments used in the inference phase? I didn't find an explanation in the paper. Why does using global inference generates such low performance? The salience sampling breaks the temporal dependency of the video (compared to uniform sampling), which is an important characteristic to solve most video tasks. Do you think this sampling will prevent LGDN from applying to other video tasks that highly depend on temporal information? Limitations The paper's writing can be improved. The method has many components, and I was confused about which part addresses what problem at the beginning. But in general, I still understand the idea and contribution of the paper. My main concern is about the fairness of the experiments and some questions about the technical approach (refer to Questions). I will raise the score if those problems are addressed.
NIPS
Title LGDN: Language-Guided Denoising Network for Video-Language Modeling Abstract Video-language modeling has attracted much attention with the rapid growth of web videos. Most existing methods assume that the video frames and text description are semantically correlated, and focus on video-language modeling at video level. However, this hypothesis often fails for two reasons: (1) With the rich semantics of video contents, it is difficult to cover all frames with a single videolevel description; (2) A raw video typically has noisy/meaningless information (e.g., scenery shot, transition or teaser). Although a number of recent works deploy attention mechanism to alleviate this problem, the irrelevant/noisy information still makes it very difficult to address. To overcome such challenge, we thus propose an efficient and effective model, termed Language-Guided Denoising Network (LGDN), for video-language modeling. Different from most existing methods that utilize all extracted video frames, LGDN dynamically filters out the misaligned or redundant frames under the language supervision and obtains only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state-of-the-arts by large margins. We also provide detailed ablation study to reveal the critical importance of solving the noise issue, in hope of inspiring future video-language work. 1 Introduction Humans are exposed to the world through a variety of sensory organs, such as eyes, ears, and the sense of touch. In the past few years, multi-modal data (e.g., text or video) has grown and accumulated rapidly on the Internet, which brings the increasing demands for video-language understanding. As one of the fundamental topics, video-language modeling is still challenging due to the heterogeneity of the video-text data. More notably, the video-text data is typically noisy (e.g., misaligned or semi-relevant, as shown in Figure 1), leading to intractable video-language modeling. The dominant paradigm [8, 30, 13, 14, 45] for video-language modeling is to first extract language features and dense video features via off-the-shelf language and vision models (e.g., BERT [7], 3D CNN [48]), and then model the cross-modal representation by defining the objective function (e.g., triplet loss [17]) within a joint semantic space. Although achieving great success, these methods typically densely sample frames from a full sequence of raw video to obtain richer representation and thus cost excessive computation. Since the heavy computation makes it challenging to train the whole network end-to-end, they often achieve sub-optimal performance in video-language modeling. Recently, ClipBERT [25] proposes a sparse sampling strategy to tackle this drawback. Concretely, ∗The corresponding author. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). ClipBERT first samples video frames sparsely (8–16 frames per video), and then models the crossmodal alignment at frame-level. This sparse sampling paradigm enables end-to-end training, leading to much better performance. Nevertheless, token-level cross-modal interaction, which has achieved great success in image-text modeling [21, 26], is still not well explored for video-language modeling due to the heavy resource computation (even with 8–16 frames per video). Moreover, both the dominant paradigm and ClipBERT’s sparse sampling paradigm assume that video frames and the text description (w.r.t. 
a video-text pair) are semantically correlated, which is often invalid in practice. The correlation hypothesis often fails for two reasons: (1) With the rich semantics of video contents, it is hard to cover all frames with a single video-level description; (2) A raw video often has noisy or meaningless information (e.g., scenery shot, transition or teaser). For the dominant paradigm which utilizes densely-sampled frames, though often with self-attention mechanism [43], the irrelevant/noisy information makes it hard to learn high-quality video-language representation. For the sparse sampling paradigm used in ClipBERT that models the cross-modal alignment at frame-level, the misaligned frame-text pairs are wrongly forced to become closer, which inevitably leads to inaccurate crossmodal alignment. Overall, due to this noise issue (see Figure 1), video-language modeling is still challenging. Note that humans also encounter such problem in reality, but seem to be born with the ability to resist noise. That is, everyone can quickly scan through the entire video, easily ignore the noisy frames and focus on the salient ones given the text. Motivated by this human ability, we propose a Language-Guided Denoising Network termed LGDN to dynamically filter out irrelevant or redundant information under the language supervision for better video-language modeling. Concretely, we devise a Salient Frame Proposal (SFP) mechanism which adopts four strategies to estimate frame-level relevance scores under the language supervision and proposes/selects only salient frames (per video) for precisely video-language modeling. Although the frame embeddings and text embeddings can be (roughly) aligned by introducing a Momentum Video-Level Contrastive Learning (MVCL) module, it is vital to precisely establish frame-text alignment for proposing salient frames. Therefore, based on multiple instance learning (MIL), we propose a Momentum Frame-Level Multiple Salient-instance learning (MSL) -Contrastive Learning (MFCL) module for video-language modeling at frame-level. Finally, with our SFP mechanism, we propose a Language-Guided Salient Frame Matching (LSFM) module for fine-grained alignment, which adopts a token-aware cross-attention Transformer for cross-modal token-level alignment. Our main contributions are as follows: (1) We devise a salient frame proposal mechanism that can dynamically filter out irrelevant information under the language supervision, meanwhile maintaining salient information. (2) We propose an end-to-end framework termed LGDN for video-language modeling with cross-modal interaction at three levels: language-guided salient frame matching at token-level, momentum frame-level MSL-contrastive learning, and momentum video-level contrastive learning. (3) We evaluate our LGDN on five public datasets and find that our LGDN outperforms the latest competitors by large margins. We also provide detailed ablation study to reveal the critical importance of solving the noise issue, in hope of inspiring future video-language work. 2 Related Work Video-Language Modeling. Video-language modeling, a fundamental research topic that is beneficial for search engine and video recommendation, has attracted a lot of attention in recent years with the rapid growth of web videos. Previous works have made great efforts to model richer rep- resentations for video and text modalities and then align the features of the two modalities by the objective function (e.g., triplet loss). 
One common representative approach [5, 20] is to adopt a Graph Convolution Network (GCN) to extract richer information for video-text retrieval. Another representative approach [30, 13, 14, 52, 29] is to exploit extra experts (e.g., object, motion, speech) for video-language modeling. Recently, ClipBERT [25] proposes a sparse sampling strategy that enables end-to-end training, thus achieving higher performance. Moreover, Frozen in Time [2] also follows a sparse sampling paradigm, and proposes an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. However, as illustrated in Figure 1, a raw video typically has noisy/meaningless information, and thus the presence of misaligned frames is inevitable during video-language modeling. Note that most existing methods assume that the video frames and paired text are semantically correlated, without considering the noise phenomenon. Although a self-attention mechanism has been widely applied, the misaligned frames still harm the cross-modal alignment. In this work, we thus propose a salient frame proposal mechanism to effectively (and directly) address this problem. Cross-Modal Alignment Objective Functions Most previous methods adopt triplet loss as a major objective function for video-language modeling. CGMSCD [14] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning. More recent works [41, 12, 19] propose to apply the InfoNCE contrastive loss [47, 38, 6] to enhance representation learning. Particularly, BriVL [18], ALBEF [26] and COTS [32] introduce a momentum mechanism [15] to maintain more negative samples for image-text contrastive learning. Following these state-of-the-art models, we propose momentum video-level contrastive learning for video-text global alignment in this paper. Note that MIL-NCE [35] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [36]. In this work, we thus propose momentum frame-level MSLcontrastive learning to assist in addressing the misaligned frame problem. 3 Methodology Figure 2 gives a brief overview of our LGDN framework for video-language modeling, which is composed of four main components: 1) language and vision representation extractors; 2) momentum video-level contrastive learning; 3) momentum frame-level MSL-contrastive learning, and 4) language-guided salient frame matching. In the following, we will describe each component in detail. 3.1 Feature Representation Vision Representation. Given an input video V as a sequence of frames {Ei}Ni=1, where N is the length of the video, we utilize a 2-D vision Transformer (e.g., ViT) as our vision backbone to extract frame-level features E = {E1,E2, ...,EN}. Each frame Ei of video V can be represented as Ei = [ecls; e1; ...; ekv−1] ∈ Rkv×Dv , where ecls denotes the [CLS] token, kv denotes the patch sequence length, and Dv denotes the dimension of the patch embeddings. We utilize a fully-connected layer to project the [CLS] token into the frame embedding fei . We then deploy a temporal module T (e.g., a Transformer layer) to aggregate the frame embeddings to obtain the final video embedding: fv = T ([fe1 , f e 2 , ..., f e N ]) = f v(V ), (1) where fv denotes the entire vision (video) encoder. Language Representation. 
Given an input text L, we utilize BERT-Base as our language backbone to extract text feature L, which can be represented as L = [lcls; l1; ...; lkl−1] ∈ Rkl×Dl , where lcls is the [CLS] token, kl is the token sequence length, and Dl is the dimension of the token embeddings. We deploy a fully-connected layer to project the [CLS] token into the text embedding f l = f l(L), where f l is the language encoder. 3.2 Momentum Video-Level Contrastive Learning (MVCL) Module Note that our LGDN is designed to filter out the unmatched/redundant frames for better token-level alignment, without leveraging the temporal information of the videos explicitly. Therefore, we firstly introduce a Momentum Video-Level Contrastive Learning (MVCL) module to address this problem. The MVCL module utilizes a temporal module (e.g., Transformer block) to aggregate the frame embeddings to obtain the video embedding. Contrastive learning is then applied for holistic video-text alignment. However, video data takes up large GPU memory and the mini-batch size tends to be small with strict resource, which brings harm to contrastive learning. Inspired by MoCo [15], we introduce the momentum mechanism to maintain massive negative samples in memory bank for contrastive learning. Concretely, We firstly maintain video memory bank Mv = {q̂vj} Nm j=1 and text memory bank Ml = {q̂lj} Nm j=1 to store video/text features, where Nm denotes the memory bank size and q̂vj / q̂ l j denotes the j-th stored video/text feature vector. Let f v (with parameters θv) and f̂v (with parameters θ̂v) denote vision encoder and vision momentum encoder, respectively. Similarly, let f l (with parameters θl) and f̂ l (with parameters θ̂l) denote language encoder and language momentum encoder, respectively. The parameters of momentum encoders are updated by: θ̂v = m · θ̂v + (1−m) · θv, θ̂l = m · θ̂l + (1−m) · θl, (2) where m is the momentum coefficient hyper-parameter. The loss function is thus constructed as follow: for each video Vi in mini-batch B, we define the video-to-text contrastive loss between its paired text Li and all negative samples in the text memory bank Ml, resulting in an InfoNCE loss (with τ being the temperature hyper-parameter): LV2T =− 1 |B| ∑ (Vi,Li)∈B log exp(cos(fvi , f̂ l i )/τ) exp(cos(fvi , f̂ l i )/τ)+ ∑ q̂lj∈Ml exp(cos(fvi , q̂ l j)/τ) , (3) where f̂ li = f̂ v(li), and the similarity of two features is measured by the cosine similarity. Similarly, given each text description Li in mini-batch B, we define the text-to-video contrastive loss as: LT2V =− 1 |B| ∑ (Vi,Li)∈B log exp(cos(f li , f̂ v i )/τ) exp(cos(f li , f̂ v i )/τ)+ ∑ q̂vj∈Mv exp(cos(f li , q̂ v j )/τ) , (4) where f̂vi = f̂ v(Vi). Finally, the objective function for MVCL is defined as follows: LMVCL = LV2T + LT2V. (5) 3.3 Salient Frame Proposal (SFP) Mechanism As shown in Figure 1, video-text data inevitably contains misaligned frame-text pairs. Although an attention mechanism has been applied in Eq. (1), the irrelevant and noisy information would still mislead the cross-modal alignment in our model. To alleviate this problem, we thus propose a Salient Frame Proposal (SFP) mechanism for video-language modeling. The core idea of our SFP mechanism is to dynamically filter out misaligned or redundant frames and maintain only a few important frames to represent the video well, which are called as salient frames. Formally, for each video-text pair, we first identify the relevance score R(j|i) between the text Li and the j-th frame Ei,j of the video Vi. 
Further, we perform language-guided denoising to retain only top-Nsalient salient frames by filtering out the unmatched/redundant frames from each video. Since only video-level annotations are provided, we need to estimate the relevance scores R automatically. As shown in Table 1, we introduce four strategies for estimating relevance scores. (1) SimDot prediction relies on the output of two separate encoders (i.e., frame encoder fe and language encoder f l) to model the relevance score R(j|i) by computing the dot product of the frame embedding fei,j = f e(Ei,j) and the text embedding f li = f l(Li). However, since video-text data is noisy, only utilizing single-modality encoders may result in incorrect salient frames. (2) Momentum prediction improves SimDot prediction by introducing the supervision of momentum encoders (i.e., momentum frame encoder f̂e and momentum language encoder f̂ l), where the momentum frame embedding f̂ei,j = f̂ e(Ei,j) and the momentum text embedding f̂ li = f̂ l(Li). (3) CrossMom prediction considers the frame-text alignment that is directly built on the interaction between one modality encoder and another modality’s momentum encoder. (4) Collaborative prediction combines Momentum prediction and CrossMom prediction for better performance. Although the frame embeddings and text embeddings can be (roughly) aligned through applying video-text contrastive learning in Sec. 3.2, it is vital to precisely establish frame-text alignment for proposing/selecting salient frames. To this end, we introduce the MFCL module below. 3.4 Momentum Frame-Level MSL-Contrastive Learning (MFCL) Module To dynamically filter out the unmatched/redundant frames, we propose to adopt frame-level contrastive learning to directly measure the relevance scores R between video frames and paired text. However, video data often contains misaligned frame-text pairs. Simply applying standard NCE-based contrastive learning would force the misaligned frame-text pairs to be pulled closer, which inevitably has negative effect on learning high-quality frame-text representation. Inspired by MIL-NCE [35], we thus propose a Momentum Frame-Level Multiple Salient-instance Learning (MSL) Contrastive Learning (MFCL) module to assist in alleviating the noise problem. The core idea is to use the salient frames filtered by the SFP Mechanism in each video to form a set of positive candidate pairs, instead of considering each positive pair independently. In this work, we suppose that MFCL and SFP have mutual interdependence so that they can bring boost to each other during training. Similar to MVCL, we additionally maintain a frame-level memory bank Me = {q̂ej′} Nm∗N j′=1 to store frame features, where Nm is the memory bank size, N is the number of sampled frames per video, and q̂ej′ is a stored frame feature vector. Given each text description Li in mini-batch B, we select salient frames filtered by the SFP Mechanism in the paired video Vi to form a set of positive candidate (frame-text) pairs Si and all frame samples in Me to form the negative ones. We then define the text-to-frame contrastive loss as: LT2E =− 1 |B| ∑ (Si,Li)∈B log ∑ f̂eij∈f̂si exp(cos(f li , f̂ e ij)/τ)∑ f̂eij∈f̂si exp(cos(f li , f̂ e ij)/τ)+ ∑ q̂e j′∈M e exp(cos(f li , q̂ e j′)/τ) , (6) where Si = {Ei,j}Nj=1 is the positive frame set of the video Vi, N is the frame sequence length of the video, f li = f l(Li), and f̂si = {f̂eij}Nj=1 = {f̂e(Ei,j)}Nj=1. 
Similarly, given each positive frame set Si, we define the frame-to-text contrastive loss as: LE2T =− 1 |B| ∑ (Si,Li)∈B log ∑ feij∈fsi exp(cos(feij , f̂ l i )/τ)∑ feij∈fsi exp(cos(feij , f̂ l i )/τ)+ ∑ feij∈fsi ∑ q̂l j′∈M l exp(cos(feij , q̂ l j′)/τ) , (7) where f̂ li = f̂ l(Li) and fsi = {feij}Nj=1 = {fe(Ei,j)}Nj=1 (text memory bank Ml = {q̂lj′} Nm j′=1 is defined in Sec. 3.2). As a result, by combining the text-to-frame and frame-to-text contrastive losses, the objective function for MFCL is given by: LMFCL = LE2T + LT2E. (8) 3.5 Language-Guided Salient Frame Matching (LSFM) Module After obtaining language-guided salient frames, we utilize a multi-modal cross-attention fusion Transformer (see Figure 2) to capture token-level semantic alignment between visual patches and words for better performance (see the design details of this Transformer in the supp. material). Further, we take the [CLS] token embedding outputted by the multi-modal fusion Transformer as the joint representation of a frame-text pair (Vi, Li), and deploy a fully-connected layer to predict the matched probability, which is similar to the sentence pair classification task in BERT’s pre-training phase. The matching loss is defined as: LLSFM = −E(Ei,j ,Li)∼Dsalient logP (yi,j |Ei,j ,Li), (9) where Ei,j denotes j-th frame feature of video Vi, Li denotes text feature, Dsalient is the set of salient frame-text pairs obtained by applying the SFP mechanism to the mini-batch, and yi,j is the ground-truth matching label (0 or 1) of the frame-text pair (Ei,j ,Li). During inference, we use a mean pooling layer to aggregate all salient frame scores as the video-level prediction score. Finally, by combining all the proposed modules for video-language modeling at three levels, we train our LGDN model via minimizing the total objective function: LLGDN = LMVCL + LMFCL + LLSFM. (10) 4 Experiments 4.1 Datasets and Settings Pre-Training Datasets. Due to the restricted computing resources, we follow COTS [32] to pre-train our LGDN on the pure image-text datasets. Our pre-training datasets consists of Conceptual Captions [42], SBU [39], VG [23] and MSCOCO [28], which contains 5.2 million image-text pairs. We additionally apply CC12M [3] (about 2 million URLs are now invalid) for better performance, which accumulates 15.2 million image-text pairs in total. Downstream Datasets. We evaluate our proposed LGDN on four public video-text retrieval datasets: MSR-VTT [50], MSVD [4], DiDeMo [16], and VATEX [46]. To further demonstrate the general applicability of our LGDN, we also carry out experiments on a public video-question answering dataset: MSRVTT-QA [49]. We present the details of these downstream datasets as well as the evaluation metrics for downstream tasks in the supp. material. Implementation Details. Following previous work [25], we sample N = 16 frames per video: each video is equally split into 16 segments and one frame is randomly sampled from each segment. We empirically set the initial learning rate to 1e-5 and adopt AdamW [31] with a weight decay of 0.02 for 5 epochs. In the warm-up stage (first epoch), the model is trained to optimize Eq. (10) without applying SFP mechanism. We also set the other hyper-parameters uniformly as: salient frame numbers Nsalient = 2, mini-batch size |B| = 24, momentum hyper-parameter m = 0.99, temperature τ = 0.07, and queue size Nm = 9, 600. We adopt pre-trained BERT-Base as language encoder and ViT-Base [9] as vision encoder. More details are given in the supp. material. Evaluation Metrics. 
We adopt two widely-used metrics in cross-modal retrieval: Recall at K (R@K, K= 1, 5, 10), and Median Rank (MdR) / Mean Rank (MnR). R@K means the percentage of correct matching in the K nearest points, and MdR / MnR measures the median / mean rank of target items in the retrieved ranking list. We also report two additional metrics named ‘R@Sum’ and ‘R@Mean’ in our ablation study, which sums/averages all recall metrics for overall evaluation. Following ClipBERT [25], we also report accuracy (Acc) in video-question answering task. 4.2 Ablation Study In this subsection, we conduct comprehensive ablation study to investigate the contributions of different components of our full model. If not specifically indicated, we set N = 16 for global alignment and Nsalient = 2 for token-level alignment as the default setting. Effect of Value Change of Nsalient and N . A common perspective for video/video-language understanding is that more frames per video bring better performance. We thus conduct experiments on the frame number used for token-level alignment in Figure 3(a-b). We sample N = 16 frames from each video and evaluate different variants that use Nsalient ∈ {1, 2, 3, 4, 8, 16} frames. Note that when Nsalient = 16, sampling by our SFP degrades to w/o SFP. It can be observed that utilizing only Nsalient = {2, 3, 4} salient frames filtered by our SFP significantly outperforms utilizing all 16 extracted frames meanwhile enjoying the faster speed (see the green lines). This suggests that our SFP mechanism not only selects correct salient frames but also alleviates the noise problem. To investigate the influence of value change of N on our LGDN, we evenly sample N ∈ {2, 3, 4, 8} frames per video and freeze Nsalient = {1, 2} salient frames. The results in Figure 3(c) indicate that more extracted frames per video are beneficial to the token-level alignment in our LGDN model, as it provides larger candidate set for selecting salient frames. Meanwhile, when N becomes larger (> 4), the performance tends to converge, further demonstrating the redundancy in the videos. Contributions of Each Components. We further demonstrate the contributions of the three objective functions as well as the salient frame proposal (SFP) mechanism used in our full LGDN model in Table 2. We start with the objective function LLSFM (w/o SFP), which means only applying matching loss in token-level alignment without using the SFP mechanism. It can be observed that: (1) LMFCL (and LMVCL) combined with LLSFM (w/o SFP) can bring improvements, suggesting that global alignment is beneficial to token-level alignment (during the training stage). (2) Simply applying the frame-level alignment may cause negative effect while combing with our MSL design brings better results. This demonstrates that our design of LMFCL does help alleviate the noise problem. (3) When the SFP mechanism is added (see LLSFM (w/o SFP) +LMFCL +LMVCL vs. LLSFM +LMFCL +LMVCL), the performance is significantly improved, which clearly shows the effectiveness of our proposed SFP mechanism. (4) For the same trained full LGDN model, combining the global and token-level alignment during inference can bring further improvements. Note that our full LGDN still achieves the state-of-the-art on MSR-VTT even without considering global alignment during inference. 4.3 Comparison to the State-of-the-Arts We first report the text-video retrieval results on MSR-VTT with three data partitions in Table 3. 
It can be observed that: (1) our LGDN outperforms all previous works by large margins. Particularly, as compared with the most recent model Frozen in Time [2], our LGDN achieves an improvement of 7.9% (38.9% vs. 31.0%) for Text-to-Video R@1 on the MSR-VTT 1k-A test set. (2) Our LGDN also outperforms methods utilizing extra modalities (e.g., motion and audio) or those pre-trained on extremely-large video data (e.g., HowTo100M). (3) When leveraging a much larger pre-training (image-text) dataset, our LGDN (marked with †) achieves significant improvements. To demonstrate the robustness of our model, we also evaluate it on VATEX, MSVD, and Didemo in Tables 4–6, respectively. Due to limited space, only text-to-video retrieval is considered here. For VATEX (Table 4), our LGDN significantly outperforms the state-of-the-art method Support Set which is trained on an order of magnitude more data. Our LGDN still performs the best on MSVD (Table 5) and Didemo (Table 6). Particularly, in the Didemo dataset, each description is annotated with localization information, in other words, annotations may only be aligned with the localized moments, thus causing the noise problem as many methods utilize all frames as the input. Recent works exploit temporal labels of captions to alleviate the noise problem and achieve higher performance. However, even without considering this, our LGDN still largely outperforms the most recent method Frozen [2], further demonstrating the effectiveness of our LGDN. To show the general applicability of our LGDN, we evaluate our LGDN on the VideoQA task in Table 7. Even without utilizing large-scale video datasets devoted to the VideoQA task, our LGDN outperforms all competitors, validating the effectiveness of our LGDN in VideoQA. In addition, to reveal the critical importance of solving the noise issue for video-language modeling, we directly apply the SFP mechanism to the latest model CLIP4Clip [34] in Table 8. We find that applying the SFP mechanism brings boost to Clip4CLIP. The ensemble mechanism further improves the results, indicating that the proposed SFP mechanism is complementary to the baseline. 4.4 Additional Results Applying SFP to Different Frame Sampling Techniques. Note that our SFP mechanism must be combined with a frame sampling technique since we adopt a two-stage sampling strategy in this paper. Thus, we apply our SFP mechanism to three frame sampling techniques: Sparse Sampling, Random Sampling, and Dense Uniform (equally interval sampling). The obtained results on the MSR-VTT 1kA test set are provided in Table 9. It can be observed that our SFP significantly boosts different sampling strategies, further demonstrating the general applicability of our SFP mechanism. Expansion of Relevance Score Estimator. In Sec. 3.3, we have proposed four relevance score estimators for the LSFM module. To find out which is the best, we present the ablation study results for different relevance score estimators in Table 10. We can see a large gap between SFP and random sampling (w/o SFP), directly demonstrating the effectiveness of the proposed SFP mechanism. Meanwhile, both Momentum and CrossMom outperform SimDot, suggesting that introducing momentum encoder is beneficial to relevance score estimation. Collaborative that combines Momentum and CrossMom generally leads to further improvements. Model Capacity. We also provide the detailed comparison to other methods in terms of model capacity and R@SUM (on the MSR-VTT 1kA test set) in Table 11. 
It can be clearly seen that: (i) When fusion layers are not used (i.e., only global alignment is adopted), our LGDN (global) outperforms the state-of-the-art method Frozen in Time [2], but with much less model parameters. (ii) Our full LGDN performs much better than all the competitors, but its parameter number (215M) is still comparable to that of Frozen in Time (180M) and even significantly smaller than those of the other competitors. These observations suggest that the performance gains obtained by our LGDN is not due to utilizing more model parameters. 4.5 Visualization Results We provide visualization of our LGDN in Figure 4. We uniformly sample 5 frames from each video and provide relevance scores of 5 frames on the left, and the red ones denote salient frames selected by the SFP mechanism. It can be seen that: (1) Although the holistic video is semantically related to the paired text, there still exist noisy frames (e.g., the transition in Frame 1 and Frame 3 of Query7500) and unrelated frames (e.g., in Frame 4 and Frame 5 of Query7544, a man is rolling while the paired text is ‘a car goes racing down the road’). (2) The relevance scores obtained from the SFP mechanism correctly measure the consistency between each frame and the paired text, which indeed helps our LGDN to precisely filter out noisy information for better video-language modeling. 5 Conclusion In this work, we propose a novel Language-Guided Denoising Network (LGDN) for video-language modeling, which can dynamically filter out the unmatched or redundant frames under the language supervision and thus maintain only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state-of-the-arts by large margins. In the future, we will consider aggregating temporal information on salient frames and apply our approach to more challenging video-language tasks (e.g., video grounding). Acknowledgments and Disclosure of Funding This work was supported in part by National Natural Science Foundation of China (61976220 and 61832017), Beijing Outstanding Young Scientist Program (BJJWZYJH012019100020098), and the Research Seed Funds of School of Interdisciplinary Studies, Renmin University of China.
1. What is the focus and contribution of the paper regarding video-language modeling?
2. What are the strengths of the proposed approach, particularly in terms of its organization, noise issue, and salient frame proposal mechanism?
3. What are the weaknesses of the paper, especially regarding the need for more analysis and ablation studies?
4. Do you have any concerns about the necessity of the LGDN model and its comparison with state-of-the-art methods?
5. What are the limitations and potential negative societal impacts of the work that the authors did not discuss?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a Language-Guided Denoising Network (LGDN) for video-language modeling, which can deal with the noisy information in video frames. LGDN dynamically filters out the misaligned or redundant frames under the language supervision and obtains only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that LGDN outperforms the state-of-the-arts by large margins. Strengths And Weaknesses Strengths: The paper is well organized and easy to read. The noise issue in video-text retrieval task studied by this work is practical, and the proposed salient frame proposal mechanism seems simple and effective. Extensive experiments are performed on video-text datasets and the experimental results are promising. Weaknesses: The paper needs more analysis and ablation studies to support its network architecture and proposals. For example, in-depth comparison between salient frame proposal (SFP) mechanism and sparse sampling or other frame sampling techniques. Also, some experimental results (e.g., Effect of SFP mechanism) in supplementary material can be included in the paper. Questions Table 2 shows the improvement of L M V C L is marginal. Is it necessary to the LGDN model? Can MIL-NCE be used to compute the L M V C L ? The authors claim that the proposed LGDN outperforms the state-of-the-arts by large margins. However, as shown in Table 8, LGDN performs worse than Clip4CLIP. This point needs further discussion. Since the pre-training with CC12M brings significant improvements, it would be nice to compare some SOTA methods under the same pre-training settings. Limitations The authors did not discuss the limitations and potential negative societal impact of their work.
NIPS
Title Generating Videos with Scene Dynamics Abstract We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation. 1 Introduction Understanding object motions and scene dynamics is a core problem in computer vision. For both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction), a model of how scenes transform is needed. However, creating a model of dynamics is challenging because there is a vast number of ways that objects and scenes can change. In this work, we are interested in the fundamental problem of learning how scenes transform with time. We believe investigating this question may yield insight into the design of predictive models for computer vision. However, since annotating this knowledge is both expensive and ambiguous, we instead seek to learn it directly from large amounts of in-the-wild, unlabeled video. Unlabeled video has the advantage that it can be economically acquired at massive scales yet contains rich temporal signals “for free” because frames are temporally coherent. With the goal of capturing some of the temporal knowledge contained in large amounts of unlabeled video, we present an approach that learns to generate tiny videos which have fairly realistic dynamics and motions. To do this, we capitalize on recent advances in generative adversarial networks [9, 31, 4], which we extend to video. We introduce a two-stream generative model that explicitly models the foreground separately from the background, which allows us to enforce that the background is stationary, helping the network to learn which objects move and which do not. Our experiments suggest that our model has started to learn about dynamics. In our generation experiments, we show that our model can generate scenes with plausible motions.1 We conducted a psychophysical study where we asked over a hundred people to compare generated videos, and people preferred videos from our full model more often. Furthermore, by making the model conditional on an input image, our model can sometimes predict a plausible (but “incorrect”) future. In our recognition experiments, we show how our model has learned, without supervision, useful features for human action classification. Moreover, visualizations of the learned representation suggest future generation may be a promising supervisory signal for learning to recognize objects of motion. 1See http://mit.edu/vondrick/tinyvideo for the animated videos. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. The primary contribution of this paper is showing how to leverage large amounts of unlabeled video in order to acquire priors about scene dynamics. 
The secondary contribution is the development of a generative model for video. The remainder of this paper describes these contributions in detail. In section 2, we describe our generative model for video. In section 3, we present several experiments to analyze the generative model. We believe that generative video models can impact many applications, such as in simulations, forecasting, and representation learning. 1.1 Related Work This paper builds upon early work in generative video models [29]. However, previous work has focused mostly on small patches, and evaluated it for video clustering. Here, we develop a generative video model for natural scenes using state-of-the-art adversarial learning methods [9, 31]. Conceptually, our work is related to studies into fundamental roles of time in computer vision [30, 12, 2, 7, 24]. However, here we are interested in generating short videos with realistic temporal semantics, rather than detecting or retrieving them. Our technical approach builds on recent work in generative adversarial networks for image modeling [9, 31, 4, 47, 28], which we extend to video. To our knowledge, there has been relatively little work extensively studying generative adversarial networks for video. Most notably, [22] also uses adversarial networks for video frame prediction. Our framework can generate videos for longer time scales and learn representations of video using unlabeled data. Our work is also related to efforts to predict the future in video [33, 22, 43, 50, 42, 17, 8, 54] as well as concurrent work in future generation [6, 15, 20, 49, 55]. Often these works may be viewed as a generative model conditioned on the past frames. Our work complements these efforts in two ways. Firstly, we explore how to generate videos from scratch (not conditioned on the past). Secondly, while prior work has used generative models in video settings mostly on a single frame, we jointly generate a sequence of frames (32 frames) using spatio-temporal convolutional networks, which may help prevent drifts due to errors accumulating. We leverage approaches for recognizing actions in video with deep networks, but apply them for video generation instead. We use spatio-temporal 3D convolutions to model videos [40], but we use fractionally strided convolutions [51] instead because we are interested in generation. We also use two-streams to model video [34], but apply them for video generation instead of action recognition. However, our approach does not explicitly use optical flow; instead, we expect the network to learn motion features on its own. Finally, this paper is related to a growing body of work that capitalizes on large amounts of unlabeled video for visual recognition tasks [18, 46, 37, 13, 24, 25, 3, 32, 26, 27, 19, 41, 42, 1]. We instead leverage large amounts of unlabeled video for generation. 2 Generative Models for Video In this section, we present a generative model for videos. We propose to use generative adversarial networks [9], which have been shown to have good performance on image generation [31, 4]. 2.1 Review: Generative Adversarial Networks The main idea behind generative adversarial networks [9] is to train two networks: a generator network G tries to produce a video, and a discriminator network D tries to distinguish between “real“ videos and “fake” generated videos. 
One can train these networks against each other in a min-max game where the generator seeks to maximally fool the discriminator while simultaneously the discriminator seeks to detect which examples are fake:
$$\min_{w_G} \max_{w_D} \; \mathbb{E}_{x \sim p_x(x)} \left[ \log D(x; w_D) \right] + \mathbb{E}_{z \sim p_z(z)} \left[ \log \left( 1 - D(G(z; w_G); w_D) \right) \right] \quad (1)$$
where $z$ is a latent “code” that is often sampled from a simple distribution (such as a normal distribution) and $x \sim p_x(x)$ samples from the data distribution. In practice, since we do not know the true distribution of data $p_x(x)$, we can estimate the expectation by drawing from our dataset. Since we will optimize Equation 1 with gradient-based methods (SGD), the two networks G and D can take on any form appropriate for the task as long as they are differentiable with respect to parameters $w_G$ and $w_D$. We design a G and D for video.

2.2 Generator Network

The input to the generator network is a low-dimensional latent code $z \in \mathbb{R}^d$. In most cases, this code can be sampled from a distribution (e.g., Gaussian). Given a code $z$, we wish to produce a video. We design the architecture of the generator network with a few principles in mind. Firstly, we want the network to be invariant to translations in both space and time. Secondly, we want a low-dimensional $z$ to be able to produce a high-dimensional output (video). Thirdly, we want to assume a stationary camera and take advantage of the property that usually only objects move. We are interested in modeling object motion, and not the motion of cameras. Moreover, since modeling that the background is stationary is important in video recognition tasks [44], it may be helpful in video generation as well. We explore two different network architectures:

One Stream Architecture: We combine spatio-temporal convolutions [14, 40] with fractionally strided convolutions [51, 31] to generate video. Three-dimensional convolutions provide spatial and temporal invariance, while fractionally strided convolutions can upsample efficiently in a deep network, allowing $z$ to be low-dimensional. We use an architecture inspired by [31], except extended in time. We use a five-layer network of $4 \times 4 \times 4$ convolutions with a stride of 2, except for the first layer which uses $2 \times 4 \times 4$ convolutions (time × width × height). We found that these kernel sizes provided an appropriate balance between training speed and quality of generations.

Two Stream Architecture: The one-stream architecture does not model that the world is stationary and usually only objects move. We experimented with making this behavior explicit in the model. We use an architecture that enforces a static background and moving foreground. We use a two-stream architecture where the generator is governed by the combination:
$$G_2(z) = m(z) \odot f(z) + (1 - m(z)) \odot b(z). \quad (2)$$
Our intention is that $0 \leq m(z) \leq 1$ can be viewed as a spatio-temporal mask that selects either the foreground model $f(z)$ or the background model $b(z)$ for each pixel location and timestep. To enforce a background model in the generations, $b(z)$ produces a spatial image that is replicated over time, while $f(z)$ produces a spatio-temporal cuboid masked by $m(z)$. By summing the foreground model with the background model, we can obtain the final generation. Note that $\odot$ is element-wise multiplication, and we replicate singleton dimensions to match the corresponding tensor. During learning, we also add to the objective a small sparsity prior on the mask $\lambda \| m(z) \|_1$ with $\lambda = 0.1$, which we found helps encourage the network to use the background stream.
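A minimal sketch of the two-stream combination in Eq. (2), assuming the foreground, background, and mask streams are already implemented as fractionally strided (transposed) convolutional decoders; the module names and the exact tensor layout are our own assumptions rather than the authors' Torch7 implementation.

```python
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    def __init__(self, foreground_net, background_net, mask_net, sparsity_weight=0.1):
        super().__init__()
        self.f = foreground_net   # z -> (B, 3, T, H, W) spatio-temporal foreground
        self.b = background_net   # z -> (B, 3, H, W)    static background image
        self.m = mask_net         # z -> (B, 1, T, H, W) mask logits
        self.sparsity_weight = sparsity_weight

    def forward(self, z):
        fg = self.f(z)                             # moving foreground cuboid
        bg = self.b(z).unsqueeze(2)                # broadcast the static image over time
        mask = torch.sigmoid(self.m(z))            # keeps 0 <= m(z) <= 1
        video = mask * fg + (1.0 - mask) * bg      # Eq. (2), broadcast over time/channels
        sparsity = self.sparsity_weight * mask.abs().mean()   # L1 prior on the mask
        return video, sparsity
```

Here the sigmoid keeps the mask in $[0, 1]$ and the static background image is broadcast across the time dimension, mirroring the constraint that $b(z)$ is replicated over all frames; the sparsity term is added to the generator's adversarial objective during training.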
We use fractionally strided convolutional networks for m(z), f(z), and b(z). For f(z), we use the same network as the one-stream architecture, and for b(z) we use a generator architecture similar to [31]. We only use their architecture; we do not initialize with their learned weights. To create the mask m(z), we use a network that shares weights with f(z) except the last layer, which has only one output channel. We use a sigmoid activation function for the mask. We visualize the two-stream architecture in Figure 1. In our experiments, the generator produces 64 × 64 videos of 32 frames, which is a little over a second. 2.3 Discriminator Network The discriminator needs to be able to solve two problems: firstly, it must be able to distinguish realistic scenes from synthetically generated scenes, and secondly, it must be able to recognize realistic motion between frames. We chose to design the discriminator to be able to solve both of these tasks with the same model. We use a five-layer spatio-temporal convolutional network with 4 × 4 × 4 kernels so that the hidden layers can learn both visual models and motion models. We design the architecture to be the reverse of the foreground stream in the generator, replacing fractionally strided convolutions with strided convolutions (to down-sample instead of up-sample), and replacing the last layer so that it outputs a binary classification (real or fake). 2.4 Learning and Implementation We train the generator and discriminator with stochastic gradient descent. We alternate between maximizing the loss w.r.t. wD and minimizing the loss w.r.t. wG for a fixed number of iterations. All networks are trained from scratch. Our implementation is based on a modified version of [31] in Torch7. We used a more numerically stable implementation of the cross-entropy loss to prevent overflow. We use the Adam [16] optimizer with a fixed learning rate of 0.0002 and a momentum term of 0.5. The latent code has 100 dimensions, which we sample from a normal distribution. We use a batch size of 64. We initialize all weights with zero-mean Gaussian noise with standard deviation 0.01. We normalize all videos to be in the range [−1, 1]. We use batch normalization [11] followed by ReLU activations after every layer in the generator, except the output layer, which uses tanh. Following [31], we also use batch normalization in the discriminator, except for the first layer, and we use leaky ReLU [48] instead of ReLU. Training typically took several days on a GPU. 3 Experiments We experiment with the generative adversarial network for video (VGAN) on both generation and recognition tasks. We also show several qualitative examples online. 3.1 Unlabeled Video Dataset We use a large amount of unlabeled video to train our model. We downloaded over two million videos from Flickr [39] by querying for popular Flickr tags as well as querying for common English words. From this pool, we created two datasets: Unfiltered Unlabeled Videos: We use these videos directly, without any filtering, for representation learning. The dataset contains over 5,000 hours of video. Filtered Unlabeled Videos: To evaluate generations, we use the Places2 pre-trained model [53] to automatically filter the videos by scene category. Since image/video generation is a challenging problem, we assembled this dataset to better diagnose strengths and weaknesses of approaches. We experimented with four scene categories: golf course, hospital rooms (babies), beaches, and train station.
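The training recipe above (alternating updates, Adam with learning rate 0.0002 and momentum term 0.5, batch size 64, a 100-dimensional Gaussian code, and Gaussian weight initialization with standard deviation 0.01) can be summarized in a short training step. The sketch below is a PyTorch approximation under our assumptions: it reuses the TwoStreamGenerator interface from the earlier sketch and uses the commonly adopted non-saturating generator loss rather than literally minimizing log(1 − D(G(z))).

```python
import torch
import torch.nn as nn

def weights_init(module):
    # Zero-mean Gaussian initialization with standard deviation 0.01 (Section 2.4).
    if isinstance(module, (nn.Conv3d, nn.ConvTranspose3d)):
        nn.init.normal_(module.weight, mean=0.0, std=0.01)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

def make_optimizers(generator, discriminator):
    # Adam with a fixed learning rate of 0.0002 and momentum term (beta1) of 0.5.
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    return opt_g, opt_d

def train_step(generator, discriminator, real_videos, opt_g, opt_d, z_dim=100):
    bce = nn.BCEWithLogitsLoss()  # numerically stable cross-entropy, as noted in the text
    batch = real_videos.size(0)
    z = torch.randn(batch, z_dim, device=real_videos.device)

    # Discriminator step: push D(x) toward "real" and D(G(z)) toward "fake".
    opt_d.zero_grad()
    fake_videos, _ = generator(z)
    d_real = discriminator(real_videos)
    d_fake = discriminator(fake_videos.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator and keep the foreground mask sparse.
    opt_g.zero_grad()
    fake_videos, sparsity = generator(z)
    d_fake = discriminator(fake_videos)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + sparsity
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```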
Stabilization: As we are interested in the movement of objects and not camera shake, we stabilize the camera motion for both datasets. We extract SIFT keypoints [21], use RANSAC to estimate a homography (rotation, translation, scale) between adjacent frames, and warp frames to minimize background motion. When warping moves content out of the frame, we fill in the missing values using the previous frames. If the homography has too large a re-projection error, we ignore that segment of the video during training; this happened only 3% of the time. The only other pre-processing we do is normalizing the videos to be in the range [−1, 1]. We extract frames at the native frame rate (25 fps). We use 32-frame videos of spatial resolution 64 × 64. 3.2 Video Generation We evaluate both the one-stream and two-stream generator. We trained a generator for each scene category in our filtered dataset. We perform both a qualitative evaluation and a quantitative psychophysical evaluation to measure the perceptual quality of the generated videos. Qualitative Results: We show several examples of the videos generated from our model in Figure 2. We observe that (a) the generated scenes tend to be fairly sharp and (b) the motion patterns are generally correct for their respective scene. For example, the beach model tends to produce beaches with crashing waves, the golf model produces people walking on grass, and the train station generations usually show train tracks and a train with windows rapidly moving along it. While the model usually learns to put motion on the right objects, one common failure mode is that the objects lack resolution. For example, the people in the beaches and golf courses are often blobs. Nevertheless, we believe it is promising that our model can generate short motions. We visualize the behavior of the two-stream architecture in Figure 3. Baseline: Since to our knowledge there are no existing large-scale generative models of video ([33] requires an input frame), we develop a simple but reasonable baseline for this task. We train an autoencoder over our data. The encoder is similar to the discriminator network (except that it produces a 100-dimensional code), while the decoder follows the two-stream generator network. Hence, the baseline autoencoder network has a similar number of parameters to our full approach. We then feed examples through the encoder and fit a Gaussian Mixture Model (GMM) with 256 components over the 100-dimensional hidden space. To generate a novel video, we sample from this GMM and feed the sample through the decoder. Evaluation Metric: We quantitatively evaluate our generation using a psychophysical two-alternative forced choice test with workers on Amazon Mechanical Turk. We show a worker two random videos and ask them “Which video is more realistic?” We collected over 13,000 opinions across 150 unique workers. We paid workers one cent per comparison, and required workers to historically have a 95% approval rating on MTurk. We experimented with removing bad workers who frequently said real videos were not realistic, but the relative rankings did not change. We designed this experiment following advice from [38], which advocates evaluating generative models for the task at hand. In our case, we are interested in the perceptual quality of motion. We consider a model X better than model Y if workers prefer generations from X more often than generations from Y.
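The stabilization step (SIFT keypoints, a RANSAC-estimated homography between adjacent frames, and warping toward the previous frame) can be approximated with OpenCV as sketched below. The function name, the 4-pixel re-projection threshold, and the hole-filling strategy are illustrative assumptions rather than the authors' exact pipeline.

```python
import cv2
import numpy as np

def stabilize_pair(prev_frame, frame, reproj_thresh=4.0):
    """Warp `frame` toward `prev_frame` using SIFT + RANSAC.
    Returns the warped frame and the RANSAC inlier mask."""
    sift = cv2.SIFT_create()
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp1, des1 = sift.detectAndCompute(gray_prev, None)
    kp2, des2 = sift.detectAndCompute(gray_cur, None)

    # Match descriptors from the current frame (query) to the previous frame (train).
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des2, des1)
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the homography and warp the current frame onto the previous one.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    h, w = prev_frame.shape[:2]
    warped = cv2.warpPerspective(frame, H, (w, h))

    # Fill regions the warp moved out of view with the previous (already stabilized) frame.
    valid = cv2.warpPerspective(np.ones((h, w), np.uint8), H, (w, h))
    warped[valid == 0] = prev_frame[valid == 0]
    return warped, inliers
```

A low inlier count can serve as a proxy for the "too large a re-projection error" check used to discard video segments.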
Quantitative Results: Table 1 shows the percentage of times that workers preferred generations from one model over another. Workers consistently prefer videos from the generative adversarial network over those from the autoencoder. Additionally, workers show a slight preference for the two-stream architecture, especially in scenes where the background is large (e.g., golf course, beach). Although the one-stream architecture is capable of generating stationary backgrounds, it may be difficult to find this solution, motivating a more explicit architecture. The one-stream architecture generally produces high-frequency temporal flickering in the background. To evaluate whether static frames are better than our generations, we also ask workers to choose between our videos and a static frame, and workers only chose the static frame 38% of the time, suggesting our model produces more realistic motion than static frames on average. Finally, while workers generally can distinguish real videos from generated videos, workers show the most confusion with our two-stream model compared to the baselines, suggesting the two-stream generations may be more realistic on average. 3.3 Video Representation Learning We also experimented with using our model as a way to learn unsupervised representations for video. We train our two-stream model with over 5,000 hours of unfiltered, unlabeled videos from Flickr. We then fine-tune the discriminator on the task of interest (e.g., action recognition) using a relatively small set of labeled videos. To do this, we replace the last layer (which is a binary classifier) with a K-way softmax classifier. We also add dropout [36] to the penultimate layer to reduce overfitting. Action Classification: We evaluated performance on classifying actions on UCF101 [35]. We report accuracy in Figure 4a. Initializing the network with the weights learned from the generative adversarial network outperforms a randomly initialized network, suggesting that it has learned a useful internal representation for video. Interestingly, while a randomly initialized network under-performs hand-crafted STIP features [35], the network initialized with our model significantly outperforms it. We also experimented with training a logistic regression on only the last layer, which performed worse. Finally, our model slightly outperforms another recent unsupervised video representation learning approach [24]. However, our approach uses an order of magnitude fewer parameters, fewer layers (5 vs. 8), and low-resolution video. Performance vs. Data: We also experimented with varying the amount of labeled training data available to our fine-tuned network. Figure 4b reports performance versus the amount of labeled training data available. As expected, performance increases with more labeled data. The fine-tuned model shows an advantage in low-data regimes: even with one-eighth of the labeled data, the fine-tuned model still beats a randomly initialized network. Moreover, Figure 4c plots the relative accuracy gain of the fine-tuned model over the random initialization (fine-tuned performance divided by randomly initialized performance). This shows that fine-tuning with our model has a larger relative gain over random initialization when less labeled data is available, underscoring its utility in low-data regimes.
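Replacing the binary real/fake head of the pretrained discriminator with a K-way classifier and adding dropout on the penultimate features might look roughly like the PyTorch wrapper below. The feature_dim argument, the 0.5 dropout rate, and the K = 101 class count (for UCF101) are assumptions for illustration, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ActionClassifier(nn.Module):
    """Fine-tuning wrapper: a GAN-pretrained discriminator trunk feeds a K-way head.
    `feature_extractor` is assumed to be the discriminator with its last layer removed."""
    def __init__(self, feature_extractor, feature_dim, num_classes=101, p_drop=0.5):
        super().__init__()
        self.features = feature_extractor                # pretrained spatio-temporal trunk
        self.dropout = nn.Dropout(p=p_drop)              # added to reduce overfitting
        self.head = nn.Linear(feature_dim, num_classes)  # replaces the binary classifier

    def forward(self, video):
        h = self.features(video)
        h = torch.flatten(h, start_dim=1)                # flatten to (batch, feature_dim)
        return self.head(self.dropout(h))                # logits for a K-way softmax loss
```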
Encoder: We utilize the same two-stream model; however, we must make one change in order to input a static image instead of the latent code. We can do this by attaching a five-layer convolutional network to the front of the generator, which encodes the image into the latent space, similar to a conditional generative adversarial network [23]. The rest of the generator and discriminator networks remain the same. However, we add an additional loss term that minimizes the squared L2 distance between the input and the first frame of the generated video. We do this so that the generator creates videos consistent with the input image. We train from scratch with the objective:
$$\min_{w_G} \max_{w_D} \; \mathbb{E}_{x \sim p_x(x)}\left[\log D(x; w_D)\right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)}\left[\log\left(1 - D(G(x_0; w_G); w_D)\right)\right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)}\left[\lambda \|x_0 - G_0(x_0; w_G)\|_2^2\right] \quad (3)$$
where $x_0$ is the first frame of the input, $G_0(\cdot)$ is the first frame of the generated video, and $\lambda \in \mathbb{R}$ is a hyperparameter. The discriminator will try to classify realistic frames and realistic motions as before, while the generator will try to produce a realistic video such that the first frame is reconstructed well. Results: We qualitatively show a few examples of our approach in Figure 5 using held-out testing videos. Although the extrapolations are rarely correct, they often have fairly plausible motions. The most common failure is that the generated video has a scene similar but not identical to the input image, such as by changing colors or dropping/hallucinating objects. The former could be solved by a color histogram normalization in post-processing (which we did not do for simplicity), while we suspect the latter will require building more powerful generative models. The generated videos are usually not the correct video, but we observe that the motions are often plausible. We are not aware of an existing approach that can directly generate multi-frame videos from a single static image. [33, 22] can generate video, but they require multiple input frames and empirically become blurry after extrapolating many frames. [43, 50] can predict optic flow from a single image, but they do not generate several frames of motion and may be susceptible to warping artifacts. We believe this experiment shows an important application of generative video models. Visualizing Representation: Since generating the future requires understanding how objects move, the network may need to learn to recognize some objects internally, even though it is not supervised to do so. Figure 6 visualizes some activations of hidden units in the third convolutional layer. While not all units are semantic, some of the units tend to be selective for objects that are sources of motion, such as people or train tracks. These visualizations suggest that scaling up future generation might be a promising supervisory signal for object recognition and complementary to [27, 5, 46]. Conclusion: Understanding scene dynamics will be crucial for the next generation of computer vision systems. In this work, we explored how to learn some dynamics from large amounts of unlabeled video by capitalizing on adversarial learning methods. Since annotating dynamics is expensive, we believe learning from unlabeled data is a promising direction. While we are still a long way from fully harnessing the potential of unlabeled video, our experiments support that abundant unlabeled video can be fruitful for both learning to generate videos and learning visual representations. Acknowledgements: We thank Yusuf Aytar for dataset discussions.
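The generator-side objective for future generation (Equation 3) combines the adversarial term with a squared L2 reconstruction penalty on the first frame. A rough PyTorch sketch is given below; it assumes a (batch, channels, time, height, width) video layout, uses the non-saturating form of the adversarial term for numerical stability, and picks an illustrative value for λ (the paper leaves λ as a hyperparameter).

```python
import torch
import torch.nn as nn

def conditional_generator_loss(discriminator, generated_video, input_frame, lam=1.0):
    """Generator-side terms of Eq. 3: fool the discriminator while reconstructing
    the conditioning frame. `lam` (λ) is an assumed value, not taken from the paper."""
    bce = nn.BCEWithLogitsLoss()
    d_fake = discriminator(generated_video)              # discriminator logits on the fake video
    adversarial = bce(d_fake, torch.ones_like(d_fake))   # push D(G(x0)) toward "real"
    first_frame = generated_video[:, :, 0]               # G_0(x0), assuming (B, C, T, H, W)
    reconstruction = ((first_frame - input_frame) ** 2).mean()  # squared L2 on frame 0
    return adversarial + lam * reconstruction
```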
We thank MIT TIG, especially Garrett Wollman, for troubleshooting issues on storing the 26 TB of video. We are grateful for the Torch7 community for answering many questions. NVidia donated GPUs used for this research. This work was supported by NSF grant #1524817 to AT, START program at UMBC to HP, and the Google PhD fellowship to CV.
1. What is the main contribution of the paper regarding video generation?
2. How does the proposed approach differ from existing image-generation methods?
3. What are the strengths and weaknesses of the proposed method, particularly in its ability to generate plausible videos?
4. Why did the authors choose not to use batch norm in the discriminator's first layer?
5. Why do the generator and discriminator use different activation functions?
6. Can the authors provide further explanation or results regarding the reasoning behind these design choices?
7. How effective is the adversarial setup in ensuring the generated videos are realistic?
Review
This paper tackles the problem of generating short (very few frames), tiny (64x64) videos. The approach uses a "deconvolutional" neural network that is split into two parts: the first predicts a foreground "motion" and a mask; the second predicts the background. In order to ensure that the generated videos are plausible, the networks are trained in an adversarial setup. Overall, this paper is very clearly laid out, and it is very easy to follow. Given that the authors base much of their method on existing methods for image generation, the novelty of the method lies in the way they adapted such methods to generate video. It is important to emphasize that I am not familiar with any other papers that attempt to do this (and the authors also didn't seem to be able to find other such papers). The problem with video, unlike images, is that low frequencies span not only space but also time. Therefore, when generating video, typical methods will attempt to generate the temporal low frequencies first, resulting in very jarring outputs. The authors tackled this problem by explicitly decomposing the "background" from the "foreground". The background network's task is to generate the "low frequencies", while the foreground can focus (and will focus) on generating the more interesting parts (the high frequencies -- or motions of the "small" objects). In order to generate "plausible" videos, the authors employ an adversarial critic network (or discriminator). In terms of the technical content, beyond the high-level ideas, I have some questions for the authors: 1) Why did you not use batch norm in the first layer of the discriminator? 2) Why don't your generator and discriminator use the same activation functions (i.e., ReLU vs. leaky ReLU)? In the final version of the paper, please attempt to describe the reasoning for these decisions, and possibly provide results showing a more consistent setup. Regarding the human evaluation results, I appreciate the honesty -- most of the videos don't seem "real", and some are rather jarring (especially the baby videos).
1. What is the focus of the paper in terms of the addressed problem and its significance?
2. What are the strengths of the proposed approach, particularly regarding its novelty and experimental comprehensiveness?
3. What are the weaknesses of the paper, especially regarding its incremental nature and performance compared to state-of-the-art methods?
4. Do you have any questions about the implementation or results, such as the size of the datasets or the appearance of the generated videos?
Review
The paper describes a generative adversarial convolutional neural network for video (a block of 32 frames x 64x64 pixels). The architecture is divided into two streams, one for the static background and one for the moving foreground, which are combined using a mask that is also generated. The loss is a standard generative adversarial loss. The paper shows video generation experiments on four scene categories, with qualitative results and a quantitative user study evaluating the realism of the generated videos. In addition, the paper uses the representation trained on 5,000 hours of Flickr videos for action classification, demonstrating that the resulting representation serves as a better initialization for training action classification models.
Strengths:
- The addressed problem of generative models of videos is interesting, timely, and difficult.
- Novelty: The proposed two-stream architecture (static background / moving foreground + mask) for video generation is novel (albeit somewhat incremental over the previous static image generation GANs). Nevertheless, I like the proposed extension.
- Experiments: The set of experiments is fairly comprehensive (generation, classification, user study). The results are encouraging, but the visual quality of the generated results is quite poor and the action recognition results are much below the current state-of-the-art on the considered UCF dataset.
Weaknesses:
- The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images.
- The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures.
Questions:
- What is the size of the beach/golf course/train station/hospital datasets?
- How do the video generation results from the network trained on 5,000 hours of video look?
Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but contain many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset.
NIPS
Title Generating Videos with Scene Dynamics Abstract We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation. 1 Introduction Understanding object motions and scene dynamics is a core problem in computer vision. For both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction), a model of how scenes transform is needed. However, creating a model of dynamics is challenging because there is a vast number of ways that objects and scenes can change. In this work, we are interested in the fundamental problem of learning how scenes transform with time. We believe investigating this question may yield insight into the design of predictive models for computer vision. However, since annotating this knowledge is both expensive and ambiguous, we instead seek to learn it directly from large amounts of in-the-wild, unlabeled video. Unlabeled video has the advantage that it can be economically acquired at massive scales yet contains rich temporal signals “for free” because frames are temporally coherent. With the goal of capturing some of the temporal knowledge contained in large amounts of unlabeled video, we present an approach that learns to generate tiny videos which have fairly realistic dynamics and motions. To do this, we capitalize on recent advances in generative adversarial networks [9, 31, 4], which we extend to video. We introduce a two-stream generative model that explicitly models the foreground separately from the background, which allows us to enforce that the background is stationary, helping the network to learn which objects move and which do not. Our experiments suggest that our model has started to learn about dynamics. In our generation experiments, we show that our model can generate scenes with plausible motions.1 We conducted a psychophysical study where we asked over a hundred people to compare generated videos, and people preferred videos from our full model more often. Furthermore, by making the model conditional on an input image, our model can sometimes predict a plausible (but “incorrect”) future. In our recognition experiments, we show how our model has learned, without supervision, useful features for human action classification. Moreover, visualizations of the learned representation suggest future generation may be a promising supervisory signal for learning to recognize objects of motion. 1See http://mit.edu/vondrick/tinyvideo for the animated videos. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. The primary contribution of this paper is showing how to leverage large amounts of unlabeled video in order to acquire priors about scene dynamics. 
The secondary contribution is the development of a generative model for video. The remainder of this paper describes these contributions in detail. In section 2, we describe our generative model for video. In section 3, we present several experiments to analyze the generative model. We believe that generative video models can impact many applications, such as in simulations, forecasting, and representation learning. 1.1 Related Work This paper builds upon early work in generative video models [29]. However, previous work has focused mostly on small patches, and evaluated it for video clustering. Here, we develop a generative video model for natural scenes using state-of-the-art adversarial learning methods [9, 31]. Conceptually, our work is related to studies into fundamental roles of time in computer vision [30, 12, 2, 7, 24]. However, here we are interested in generating short videos with realistic temporal semantics, rather than detecting or retrieving them. Our technical approach builds on recent work in generative adversarial networks for image modeling [9, 31, 4, 47, 28], which we extend to video. To our knowledge, there has been relatively little work extensively studying generative adversarial networks for video. Most notably, [22] also uses adversarial networks for video frame prediction. Our framework can generate videos for longer time scales and learn representations of video using unlabeled data. Our work is also related to efforts to predict the future in video [33, 22, 43, 50, 42, 17, 8, 54] as well as concurrent work in future generation [6, 15, 20, 49, 55]. Often these works may be viewed as a generative model conditioned on the past frames. Our work complements these efforts in two ways. Firstly, we explore how to generate videos from scratch (not conditioned on the past). Secondly, while prior work has used generative models in video settings mostly on a single frame, we jointly generate a sequence of frames (32 frames) using spatio-temporal convolutional networks, which may help prevent drifts due to errors accumulating. We leverage approaches for recognizing actions in video with deep networks, but apply them for video generation instead. We use spatio-temporal 3D convolutions to model videos [40], but we use fractionally strided convolutions [51] instead because we are interested in generation. We also use two-streams to model video [34], but apply them for video generation instead of action recognition. However, our approach does not explicitly use optical flow; instead, we expect the network to learn motion features on its own. Finally, this paper is related to a growing body of work that capitalizes on large amounts of unlabeled video for visual recognition tasks [18, 46, 37, 13, 24, 25, 3, 32, 26, 27, 19, 41, 42, 1]. We instead leverage large amounts of unlabeled video for generation. 2 Generative Models for Video In this section, we present a generative model for videos. We propose to use generative adversarial networks [9], which have been shown to have good performance on image generation [31, 4]. 2.1 Review: Generative Adversarial Networks The main idea behind generative adversarial networks [9] is to train two networks: a generator network G tries to produce a video, and a discriminator network D tries to distinguish between “real“ videos and “fake” generated videos. 
One can train these networks against each other in a min-max game where the generator seeks to maximally fool the discriminator while simultaneously the discriminator seeks to detect which examples are fake: min wG max wD Ex∼px(x) [logD(x;wD)] + Ez∼pz(z) [log (1−D(G(z;wG);wD))] (1) where z is a latent “code” that is often sampled from a simple distribution (such as a normal distribution) and x ∼ px(x) samples from the data distribution. In practice, since we do not know the true distribution of data px(x), we can estimate the expectation by drawing from our dataset. Since we will optimize Equation 1 with gradient based methods (SGD), the two networks G and D can take on any form appropriate for the task as long as they are differentiable with respect to parameters wG and wD. We design a G and D for video. 2.2 Generator Network The input to the generator network is a low-dimensional latent code z ∈ Rd. In most cases, this code can be sampled from a distribution (e.g., Gaussian). Given a code z, we wish to produce a video. We design the architecture of the generator network with a few principles in mind. Firstly, we want the network to be invariant to translations in both space and time. Secondly, we want a low-dimensional z to be able to produce a high-dimensional output (video). Thirdly, we want to assume a stationary camera and take advantage of the the property that usually only objects move. We are interested in modeling object motion, and not the motion of cameras. Moreover, since modeling that the background is stationary is important in video recognition tasks [44], it may be helpful in video generation as well. We explore two different network architectures: One Stream Architecture: We combine spatio-temporal convolutions [14, 40] with fractionally strided convolutions [51, 31] to generate video. Three dimensional convolutions provide spatial and temporal invariance, while fractionally strided convolutions can upsample efficiently in a deep network, allowing z to be low-dimensional. We use an architecture inspired by [31], except extended in time. We use a five layer network of 4× 4× 4 convolutions with a stride of 2, except for the first layer which uses 2× 4× 4 convolutions (time× width× height). We found that these kernel sizes provided an appropriate balance between training speed and quality of generations. Two Stream Architecture: The one stream architecture does not model that the world is stationary and usually only objects move. We experimented with making this behavior explicit in the model. We use an architecture that enforces a static background and moving foreground. We use a two-stream architecture where the generator is governed by the combination: G2(z) = m(z) f(z) + (1−m(z)) b(z). (2) Our intention is that 0 ≥ m(z) ≥ 1 can be viewed as a spatio-temporal mask that selects either the foreground f(z) model or the background model b(z) for each pixel location and timestep. To enforce a background model in the generations, b(z) produces a spatial image that is replicated over time, while f(z) produces a spatio-temporal cuboid masked by m(z). By summing the foreground model with the background model, we can obtain the final generation. Note that is element-wise multiplication, and we replicate singleton dimensions to match its corresponding tensor. During learning, we also add to the objective a small sparsity prior on the mask λ‖m(z)‖1 for λ = 0.1, which we found helps encourage the network to use the background stream. 
We use fractionally strided convolutional networks for m(z), f(z), and b(z). For f(z), we use the same network as the one-stream architecture, and for b(z) we use a similar generator architecture to [31]. We only use their architecture; we do not initialize with their learned weights. To create the mask m(z), we use a network that shares weights with f(z) except the last layer, which has only one output channel. We use a sigmoid activation function for the mask. We visualize the two-stream architecture in Figure 1. In our experiments, the generator produces 64× 64 videos for 32 frames, which is a little over a second. 2.3 Discriminator Network The discriminator needs to be able to solve two problems: firstly, it must be able to classify realistic scenes from synthetically generated scenes, and secondly, it must be able to recognize realistic motion between frames. We chose to design the discriminator to be able to solve both of these tasks with the same model. We use a five-layer spatio-temporal convolutional network with kernels 4× 4× 4 so that the hidden layers can learn both visual models and motion models. We design the architecture to be reverse of the foreground stream in the generator, replacing fractionally strided convolutions with strided convolutions (to down-sample instead of up-sample), and replacing the last layer to output a binary classification (real or not). 2.4 Learning and Implementation We train the generator and discriminator with stochastic gradient descent. We alternate between maximizing the loss w.r.t. wD and minimizing the loss w.r.t. wG until a fixed number of iterations. All networks are trained from scratch. Our implementation is based off a modified version of [31] in Torch7. We used a more numerically stable implementation of cross entropy loss to prevent overflow. We use the Adam [16] optimizer and a fixed learning rate of 0.0002 and momentum term of 0.5. The latent code has 100 dimensions, which we sample from a normal distribution. We use a batch size of 64. We initialize all weights with zero mean Gaussian noise with standard deviation 0.01. We normalize all videos to be in the range [−1, 1]. We use batch normalization [11] followed by the ReLU activation functions after every layer in the generator, except the output layers, which uses tanh. Following [31], we also use batch normalization in the discriminator except for the first layer and we instead use leaky ReLU [48]. Training typically took several days on a GPU. 3 Experiments We experiment with the generative adversarial network for video (VGAN) on both generation and recognition tasks. We also show several qualitative examples online. 3.1 Unlabeled Video Dataset We use a large amount of unlabeled video to train our model. We downloaded over two million videos from Flickr [39] by querying for popular Flickr tags as well as querying for common English words. From this pool, we created two datasets: Unfiltered Unlabeled Videos: We use these videos directly, without any filtering, for representation learning. The dataset is over 5, 000 hours. Filtered Unlabeled Videos: To evaluate generations, we use the Places2 pre-trained model [53] to automatically filter the videos by scene category. Since image/video generation is a challenging problem, we assembled this dataset to better diagnose strengths and weaknesses of approaches. We experimented with four scene categories: golf course, hospital rooms (babies), beaches, and train station. 
Stabilization: As we are interested in the movement of objects and not camera shake, we stabilize the camera motion for both datasets. We extract SIFT keypoints [21], use RANSAC to estimate a homography (rotation, translation, scale) between adjacent frames, and warp frames to minimize background motion. When the homography moves out of the frame, we fill in the missing values using the previous frames. If the homography has too large a re-projection error, we ignore that segment of the video for training, which happened only 3% of the time. The only other pre-processing we do is normalizing the videos to be in the range [−1, 1]. We extract frames at the native frame rate (25 fps). We use 32-frame videos of spatial resolution 64×64.

3.2 Video Generation
We evaluate both the one-stream and two-stream generators. We trained a generator for each scene category in our filtered dataset. We perform both a qualitative evaluation and a quantitative psychophysical evaluation to measure the perceptual quality of the generated videos.

Qualitative Results: We show several examples of the videos generated from our model in Figure 2. We observe that (a) the generated scenes tend to be fairly sharp and (b) the motion patterns are generally correct for their respective scene. For example, the beach model tends to produce beaches with crashing waves, the golf model produces people walking on grass, and the train station generations usually show train tracks and a train with windows rapidly moving along it. While the model usually learns to put motion on the right objects, one common failure mode is that the objects lack resolution. For example, the people in the beaches and golf courses are often blobs. Nevertheless, we believe it is promising that our model can generate short motions. We visualize the behavior of the two-stream architecture in Figure 3.

Baseline: Since to our knowledge there are no existing large-scale generative models of video ([33] requires an input frame), we develop a simple but reasonable baseline for this task. We train an autoencoder over our data. The encoder is similar to the discriminator network (except that it produces a 100-dimensional code), while the decoder follows the two-stream generator network. Hence, the baseline autoencoder network has a similar number of parameters as our full approach. We then feed examples through the encoder and fit a Gaussian Mixture Model (GMM) with 256 components over the 100-dimensional hidden space. To generate a novel video, we sample from this GMM and feed the sample through the decoder.

Evaluation Metric: We quantitatively evaluate our generations using a psychophysical two-alternative forced choice test with workers on Amazon Mechanical Turk. We show a worker two random videos and ask them “Which video is more realistic?” We collected over 13,000 opinions across 150 unique workers. We paid workers one cent per comparison and required workers to historically have a 95% approval rating on MTurk. We experimented with removing bad workers that frequently said real videos were not realistic, but the relative rankings did not change. We designed this experiment following advice from [38], which advocates evaluating generative models for the task at hand. In our case, we are interested in the perceptual quality of motion. We consider a model X better than model Y if workers prefer generations from X more than generations from Y.
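Returning to the stabilization pre-processing described at the start of this section, the sketch below shows how one adjacent-frame alignment step could be implemented with OpenCV. It assumes a modern OpenCV build with SIFT available; note that the paper restricts the transform to rotation, translation, and scale, whereas for brevity this sketch fits a full homography with cv2.findHomography.

```python
import cv2
import numpy as np

def align_to_previous(prev_frame, frame, ratio=0.75):
    """Warp `frame` onto `prev_frame` with SIFT + RANSAC and report the re-projection error."""
    sift = cv2.SIFT_create()
    g_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g_cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp1, des1 = sift.detectAndCompute(g_prev, None)
    kp2, des2 = sift.detectAndCompute(g_cur, None)

    # Match current-frame descriptors against the previous frame; keep good matches (ratio test).
    pairs = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)   # points in current frame
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)   # points in previous frame

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = prev_frame.shape[:2]
    warped = cv2.warpPerspective(frame, H, (w, h))                           # cancels background motion
    err = np.linalg.norm(cv2.perspectiveTransform(src, H) - dst, axis=2).mean()
    return warped, err   # segments with a large error would be dropped from training
```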
Quantitative Results: Table 1 shows the percentage of times that workers preferred generations from one model over another. Workers consistently prefer videos from the generative adversarial network over the autoencoder. Additionally, workers show a slight preference for the two-stream architecture, especially in scenes where the background is large (e.g., golf course, beach). Although the one-stream architecture is capable of generating stationary backgrounds, it may be difficult to find this solution, motivating a more explicit architecture. The one-stream architecture generally produces high-frequency temporal flickering in the background. To evaluate whether static frames are better than our generations, we also ask workers to choose between our videos and a static frame, and workers only chose the static frame 38% of the time, suggesting our model produces more realistic motion than static frames on average. Finally, while workers generally can distinguish real videos from generated videos, the workers show the most confusion with our two-stream model compared to baselines, suggesting the two-stream generations may be more realistic on average.

3.3 Video Representation Learning
We also experimented with using our model as a way to learn unsupervised representations for video. We train our two-stream model with over 5,000 hours of unfiltered, unlabeled videos from Flickr. We then fine-tune the discriminator on the task of interest (e.g., action recognition) using a relatively small set of labeled video. To do this, we replace the last layer (which is a binary classifier) with a K-way softmax classifier. We also add dropout [36] to the penultimate layer to reduce overfitting.

Action Classification: We evaluated performance on classifying actions on UCF101 [35]. We report accuracy in Figure 4a. Initializing the network with the weights learned from the generative adversarial network outperforms a randomly initialized network, suggesting that it has learned a useful internal representation for video. Interestingly, while a randomly initialized network underperforms hand-crafted STIP features [35], the network initialized with our model significantly outperforms it. We also experimented with training a logistic regression on only the last layer, which performed worse. Finally, our model slightly outperforms another recent unsupervised video representation learning approach [24]. However, our approach uses an order of magnitude fewer parameters, fewer layers (5 vs. 8), and low-resolution video.

Performance vs Data: We also experimented with varying the amount of labeled training data available to our fine-tuned network. Figure 4b reports performance versus the amount of labeled training data available. As expected, performance increases with more labeled data. The fine-tuned model shows an advantage in low-data regimes: even with one eighth of the labeled data, the fine-tuned model still beats a randomly initialized network. Moreover, Figure 4c plots the relative accuracy gain of the fine-tuned model over the random initialization (fine-tuned performance divided by randomly initialized performance). This shows that fine-tuning with our model has a larger relative gain over random initialization when less labeled data is available, showing its utility in low-data regimes.
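To illustrate the fine-tuning recipe of Section 3.3 above — reuse the discriminator trunk, swap the final real/fake layer for a K-way classifier, and add dropout — a minimal PyTorch sketch follows. The attribute name `discriminator.features` and the feature dimensionality are hypothetical placeholders, not the authors' actual module layout.

```python
import torch.nn as nn

def make_action_classifier(discriminator, feature_dim, num_classes=101, p_drop=0.5):
    """Replace the binary real/fake head of a trained discriminator with a K-way classifier."""
    return nn.Sequential(
        discriminator.features,               # hypothetical: the spatio-temporal conv trunk of D
        nn.Flatten(),                         # flatten the final feature map
        nn.Dropout(p=p_drop),                 # dropout on the penultimate layer to reduce overfitting
        nn.Linear(feature_dim, num_classes),  # K-way head (e.g., 101 UCF101 action classes)
    )
```

The resulting module would be trained with a standard cross-entropy loss on the labeled action-recognition data.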
3.4 Future Generation
We investigate whether our approach can be used to generate the future of a static image. Specifically, given a static image x_0, can we extrapolate a video of plausible subsequent frames?

Encoder: We utilize the same model as our two-stream model; however, we must make one change in order to input the static image instead of the latent code. We do this by attaching a five-layer convolutional network to the front of the generator which encodes the image into the latent space, similar to a conditional generative adversarial network [23]. The rest of the generator and discriminator networks remain the same. However, we add an additional loss term that minimizes the L1 distance between the input and the first frame of the generated video. We do this so that the generator creates videos consistent with the input image. We train from scratch with the objective:

min_{w_G} max_{w_D}  E_{x ∼ p_x(x)}[log D(x; w_D)] + E_{x_0 ∼ p_{x_0}(x_0)}[log(1 − D(G(x_0; w_G); w_D))] + E_{x_0 ∼ p_{x_0}(x_0)}[λ ‖x_0 − G_0(x_0; w_G)‖_2^2]    (3)

where x_0 is the first frame of the input, G_0(·) is the first frame of the generated video, and λ ∈ R is a hyperparameter. The discriminator will try to classify realistic frames and realistic motions as before, while the generator will try to produce a realistic video such that the first frame is reconstructed well.

Results: We qualitatively show a few examples of our approach in Figure 5 using held-out testing videos. Although the extrapolations are rarely correct, they often have fairly plausible motions. The most common failure is that the generated video has a scene similar but not identical to the input image, such as by changing colors or dropping/hallucinating objects. The former could be solved by a color histogram normalization in post-processing (which we did not do for simplicity), while we suspect the latter will require building more powerful generative models. The generated videos are usually not the correct video, but we observe that the motions are often plausible. We are not aware of an existing approach that can directly generate multi-frame videos from a single static image. [33, 22] can generate video, but they require multiple input frames and empirically become blurry after extrapolating many frames. [43, 50] can predict optical flow from a single image, but they do not generate several frames of motion and may be susceptible to warping artifacts. We believe this experiment shows an important application of generative video models.

Visualizing Representation: Since generating the future requires understanding how objects move, the network may need to learn to recognize some objects internally, even though it is not supervised to do so. Figure 6 visualizes some activations of hidden units in the third convolutional layer. While not all units are semantic, some of the units tend to be selective for objects that are sources of motion, such as people or train tracks. These visualizations suggest that scaling up future generation might be a promising supervisory signal for object recognition, complementary to [27, 5, 46].
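For completeness, the generator-side loss of the future-generation objective in Equation 3 above can be sketched as follows. The (N, C, T, H, W) tensor layout, the choice of λ, and the logits-based adversarial term are assumptions; the reconstruction term follows the squared L2 penalty written in the equation, and, as in the earlier sketch, the generator term uses the standard −log D surrogate.

```python
import torch
import torch.nn.functional as F

def future_generator_loss(G, D, x0, lam=1.0):
    """Generator loss for extrapolating a video from a static frame x0 (cf. Eq. 3).

    G maps a frame (N, 3, H, W) to a video (N, 3, T, H, W); frame 0 should reconstruct x0.
    """
    video = G(x0)
    n = video.size(0)
    adv = F.binary_cross_entropy_with_logits(D(video), torch.ones(n, 1))  # fool the discriminator
    recon = ((video[:, :, 0] - x0) ** 2).flatten(1).sum(dim=1).mean()     # squared L2 on the first frame
    return adv + lam * recon
```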
Conclusion: Understanding scene dynamics will be crucial for the next generation of computer vision systems. In this work, we explored how to learn some dynamics from large amounts of unlabeled video by capitalizing on adversarial learning methods. Since annotating dynamics is expensive, we believe learning from unlabeled data is a promising direction. While we are still a long way from fully harnessing the potential of unlabeled video, our experiments support that abundant unlabeled video can be lucrative both for learning to generate videos and for learning visual representations.

Acknowledgements: We thank Yusuf Aytar for dataset discussions. We thank MIT TIG, especially Garrett Wollman, for troubleshooting issues with storing the 26 TB of video. We are grateful to the Torch7 community for answering many questions. NVidia donated GPUs used for this research. This work was supported by NSF grant #1524817 to AT, the START program at UMBC to HP, and a Google PhD fellowship to CV.
1. What is the main contribution of the paper in video generation?
2. What are the strengths of the proposed approach, particularly in its ability to generate different motions for the background and foreground?
3. What are the weaknesses of the paper regarding its claims and experiments, especially in terms of data cleaning and lacking comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any concerns regarding the animation of images?
Review
Review
The paper proposes to use a two-stream architecture to generate videos, where one stream generates the background frames and the other generates the foreground frames and a set of masks. Having two streams allows the model to generate different motions for the background and the foreground. The model is trained with a VGAN approach on small video clips, and the authors show promising results for video generation, video representation learning, and animating images. After so much recent work in image generation using Generative Adversarial Networks, it is nice to see a novel proposal for videos. The paper is well written with nice examples. It is not very clear how much of the performance is due to the data cleaning, for instance filtering videos with a pre-trained image model and stabilizing the camera. There are no experiments using unfiltered unlabeled videos for generation, or without stabilization. The results in action recognition on UCF-101 are ok, but a comparison with a model pre-trained on image classification is missing. There is no evaluation of the animated images.
NIPS
Title
Generating Videos with Scene Dynamics

Abstract
We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second long at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.

1 Introduction
Understanding object motions and scene dynamics is a core problem in computer vision. For both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction), a model of how scenes transform is needed. However, creating a model of dynamics is challenging because there is a vast number of ways that objects and scenes can change. In this work, we are interested in the fundamental problem of learning how scenes transform with time. We believe investigating this question may yield insight into the design of predictive models for computer vision. However, since annotating this knowledge is both expensive and ambiguous, we instead seek to learn it directly from large amounts of in-the-wild, unlabeled video. Unlabeled video has the advantage that it can be economically acquired at massive scales yet contains rich temporal signals “for free” because frames are temporally coherent.

With the goal of capturing some of the temporal knowledge contained in large amounts of unlabeled video, we present an approach that learns to generate tiny videos which have fairly realistic dynamics and motions. To do this, we capitalize on recent advances in generative adversarial networks [9, 31, 4], which we extend to video. We introduce a two-stream generative model that explicitly models the foreground separately from the background, which allows us to enforce that the background is stationary, helping the network to learn which objects move and which do not.

Our experiments suggest that our model has started to learn about dynamics. In our generation experiments, we show that our model can generate scenes with plausible motions (see http://mit.edu/vondrick/tinyvideo for the animated videos). We conducted a psychophysical study where we asked over a hundred people to compare generated videos, and people preferred videos from our full model more often. Furthermore, by making the model conditional on an input image, our model can sometimes predict a plausible (but “incorrect”) future. In our recognition experiments, we show how our model has learned, without supervision, useful features for human action classification. Moreover, visualizations of the learned representation suggest future generation may be a promising supervisory signal for learning to recognize objects of motion. The primary contribution of this paper is showing how to leverage large amounts of unlabeled video in order to acquire priors about scene dynamics.
The secondary contribution is the development of a generative model for video. The remainder of this paper describes these contributions in detail. In Section 2, we describe our generative model for video. In Section 3, we present several experiments to analyze the generative model. We believe that generative video models can impact many applications, such as simulation, forecasting, and representation learning.

1.1 Related Work
This paper builds upon early work in generative video models [29]. However, previous work has focused mostly on small patches and evaluated them for video clustering. Here, we develop a generative video model for natural scenes using state-of-the-art adversarial learning methods [9, 31]. Conceptually, our work is related to studies into the fundamental roles of time in computer vision [30, 12, 2, 7, 24]. However, here we are interested in generating short videos with realistic temporal semantics, rather than detecting or retrieving them.

Our technical approach builds on recent work in generative adversarial networks for image modeling [9, 31, 4, 47, 28], which we extend to video. To our knowledge, there has been relatively little work extensively studying generative adversarial networks for video. Most notably, [22] also uses adversarial networks for video frame prediction. Our framework can generate videos for longer time scales and learn representations of video using unlabeled data. Our work is also related to efforts to predict the future in video [33, 22, 43, 50, 42, 17, 8, 54] as well as concurrent work in future generation [6, 15, 20, 49, 55]. Often these works may be viewed as a generative model conditioned on the past frames. Our work complements these efforts in two ways. Firstly, we explore how to generate videos from scratch (not conditioned on the past). Secondly, while prior work has used generative models in video settings mostly on a single frame, we jointly generate a sequence of frames (32 frames) using spatio-temporal convolutional networks, which may help prevent drift due to accumulating errors.

We leverage approaches for recognizing actions in video with deep networks, but apply them for video generation instead. We use spatio-temporal 3D convolutions to model videos [40], but we use fractionally strided convolutions [51] instead because we are interested in generation. We also use two streams to model video [34], but apply them for video generation instead of action recognition. However, our approach does not explicitly use optical flow; instead, we expect the network to learn motion features on its own. Finally, this paper is related to a growing body of work that capitalizes on large amounts of unlabeled video for visual recognition tasks [18, 46, 37, 13, 24, 25, 3, 32, 26, 27, 19, 41, 42, 1]. We instead leverage large amounts of unlabeled video for generation.

2 Generative Models for Video
In this section, we present a generative model for videos. We propose to use generative adversarial networks [9], which have been shown to have good performance on image generation [31, 4].

2.1 Review: Generative Adversarial Networks
The main idea behind generative adversarial networks [9] is to train two networks: a generator network G tries to produce a video, and a discriminator network D tries to distinguish between “real” videos and “fake” generated videos.
1. What is the focus of the paper regarding generative adversarial networks?
2. What are the strengths of the proposed approach, particularly in terms of video generation and applications?
3. Do you have any concerns or suggestions regarding the architecture or evaluation of the method?
4. How does the reviewer assess the novelty and effectiveness of the proposed technique?
5. Are there any limitations or areas for improvement in the paper's content?
Review
Review
In this paper the authors use generative adversarial networks (GANs), specifically DC-GAN, to generate tiny videos. They do so with a two-stream architecture: one stream generates the background still image and the other generates the foreground motion and a mask. The mask is produced by a separate last layer of the foreground generator with a sigmoid activation. In addition to generating videos from noise, the authors show applications to action classification, where the method is competitive, and to extrapolating video from a single frame. The proposed work is novel and the authors have provided a sufficiently convincing evaluation of their method. The results for specific class videos look reasonable (sometimes confusing human evaluators). Overall I think that the method and evaluation are good enough to justify acceptance of the paper. One point is that the evaluation could perhaps consider an additional architecture where one initially generates a background frame that is then animated in a one-stream architecture. Right now the motion is generated separately from the background and then combined; in the alternative, the generation would be more coupled. In any case, this architecture is partially evaluated when the authors consider extrapolation from single frame images. A minor point is that few videos are available for actual evaluation. A thorough evaluation with more videos would be appreciated.
NIPS
Title Generating Videos with Scene Dynamics Abstract We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation. 1 Introduction Understanding object motions and scene dynamics is a core problem in computer vision. For both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction), a model of how scenes transform is needed. However, creating a model of dynamics is challenging because there is a vast number of ways that objects and scenes can change. In this work, we are interested in the fundamental problem of learning how scenes transform with time. We believe investigating this question may yield insight into the design of predictive models for computer vision. However, since annotating this knowledge is both expensive and ambiguous, we instead seek to learn it directly from large amounts of in-the-wild, unlabeled video. Unlabeled video has the advantage that it can be economically acquired at massive scales yet contains rich temporal signals “for free” because frames are temporally coherent. With the goal of capturing some of the temporal knowledge contained in large amounts of unlabeled video, we present an approach that learns to generate tiny videos which have fairly realistic dynamics and motions. To do this, we capitalize on recent advances in generative adversarial networks [9, 31, 4], which we extend to video. We introduce a two-stream generative model that explicitly models the foreground separately from the background, which allows us to enforce that the background is stationary, helping the network to learn which objects move and which do not. Our experiments suggest that our model has started to learn about dynamics. In our generation experiments, we show that our model can generate scenes with plausible motions.1 We conducted a psychophysical study where we asked over a hundred people to compare generated videos, and people preferred videos from our full model more often. Furthermore, by making the model conditional on an input image, our model can sometimes predict a plausible (but “incorrect”) future. In our recognition experiments, we show how our model has learned, without supervision, useful features for human action classification. Moreover, visualizations of the learned representation suggest future generation may be a promising supervisory signal for learning to recognize objects of motion. 1See http://mit.edu/vondrick/tinyvideo for the animated videos. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. The primary contribution of this paper is showing how to leverage large amounts of unlabeled video in order to acquire priors about scene dynamics. 
The secondary contribution is the development of a generative model for video. The remainder of this paper describes these contributions in detail. In section 2, we describe our generative model for video. In section 3, we present several experiments to analyze the generative model. We believe that generative video models can impact many applications, such as in simulations, forecasting, and representation learning. 1.1 Related Work This paper builds upon early work in generative video models [29]. However, previous work has focused mostly on small patches, and evaluated it for video clustering. Here, we develop a generative video model for natural scenes using state-of-the-art adversarial learning methods [9, 31]. Conceptually, our work is related to studies into fundamental roles of time in computer vision [30, 12, 2, 7, 24]. However, here we are interested in generating short videos with realistic temporal semantics, rather than detecting or retrieving them. Our technical approach builds on recent work in generative adversarial networks for image modeling [9, 31, 4, 47, 28], which we extend to video. To our knowledge, there has been relatively little work extensively studying generative adversarial networks for video. Most notably, [22] also uses adversarial networks for video frame prediction. Our framework can generate videos for longer time scales and learn representations of video using unlabeled data. Our work is also related to efforts to predict the future in video [33, 22, 43, 50, 42, 17, 8, 54] as well as concurrent work in future generation [6, 15, 20, 49, 55]. Often these works may be viewed as a generative model conditioned on the past frames. Our work complements these efforts in two ways. Firstly, we explore how to generate videos from scratch (not conditioned on the past). Secondly, while prior work has used generative models in video settings mostly on a single frame, we jointly generate a sequence of frames (32 frames) using spatio-temporal convolutional networks, which may help prevent drifts due to errors accumulating. We leverage approaches for recognizing actions in video with deep networks, but apply them for video generation instead. We use spatio-temporal 3D convolutions to model videos [40], but we use fractionally strided convolutions [51] instead because we are interested in generation. We also use two-streams to model video [34], but apply them for video generation instead of action recognition. However, our approach does not explicitly use optical flow; instead, we expect the network to learn motion features on its own. Finally, this paper is related to a growing body of work that capitalizes on large amounts of unlabeled video for visual recognition tasks [18, 46, 37, 13, 24, 25, 3, 32, 26, 27, 19, 41, 42, 1]. We instead leverage large amounts of unlabeled video for generation. 2 Generative Models for Video In this section, we present a generative model for videos. We propose to use generative adversarial networks [9], which have been shown to have good performance on image generation [31, 4]. 2.1 Review: Generative Adversarial Networks The main idea behind generative adversarial networks [9] is to train two networks: a generator network G tries to produce a video, and a discriminator network D tries to distinguish between “real“ videos and “fake” generated videos. 
One can train these networks against each other in a min-max game where the generator seeks to maximally fool the discriminator while simultaneously the discriminator seeks to detect which examples are fake: min wG max wD Ex∼px(x) [logD(x;wD)] + Ez∼pz(z) [log (1−D(G(z;wG);wD))] (1) where z is a latent “code” that is often sampled from a simple distribution (such as a normal distribution) and x ∼ px(x) samples from the data distribution. In practice, since we do not know the true distribution of data px(x), we can estimate the expectation by drawing from our dataset. Since we will optimize Equation 1 with gradient based methods (SGD), the two networks G and D can take on any form appropriate for the task as long as they are differentiable with respect to parameters wG and wD. We design a G and D for video. 2.2 Generator Network The input to the generator network is a low-dimensional latent code z ∈ Rd. In most cases, this code can be sampled from a distribution (e.g., Gaussian). Given a code z, we wish to produce a video. We design the architecture of the generator network with a few principles in mind. Firstly, we want the network to be invariant to translations in both space and time. Secondly, we want a low-dimensional z to be able to produce a high-dimensional output (video). Thirdly, we want to assume a stationary camera and take advantage of the the property that usually only objects move. We are interested in modeling object motion, and not the motion of cameras. Moreover, since modeling that the background is stationary is important in video recognition tasks [44], it may be helpful in video generation as well. We explore two different network architectures: One Stream Architecture: We combine spatio-temporal convolutions [14, 40] with fractionally strided convolutions [51, 31] to generate video. Three dimensional convolutions provide spatial and temporal invariance, while fractionally strided convolutions can upsample efficiently in a deep network, allowing z to be low-dimensional. We use an architecture inspired by [31], except extended in time. We use a five layer network of 4× 4× 4 convolutions with a stride of 2, except for the first layer which uses 2× 4× 4 convolutions (time× width× height). We found that these kernel sizes provided an appropriate balance between training speed and quality of generations. Two Stream Architecture: The one stream architecture does not model that the world is stationary and usually only objects move. We experimented with making this behavior explicit in the model. We use an architecture that enforces a static background and moving foreground. We use a two-stream architecture where the generator is governed by the combination: G2(z) = m(z) f(z) + (1−m(z)) b(z). (2) Our intention is that 0 ≥ m(z) ≥ 1 can be viewed as a spatio-temporal mask that selects either the foreground f(z) model or the background model b(z) for each pixel location and timestep. To enforce a background model in the generations, b(z) produces a spatial image that is replicated over time, while f(z) produces a spatio-temporal cuboid masked by m(z). By summing the foreground model with the background model, we can obtain the final generation. Note that is element-wise multiplication, and we replicate singleton dimensions to match its corresponding tensor. During learning, we also add to the objective a small sparsity prior on the mask λ‖m(z)‖1 for λ = 0.1, which we found helps encourage the network to use the background stream. 
We use fractionally strided convolutional networks for m(z), f(z), and b(z). For f(z), we use the same network as the one-stream architecture, and for b(z) we use a similar generator architecture to [31]. We only use their architecture; we do not initialize with their learned weights. To create the mask m(z), we use a network that shares weights with f(z) except the last layer, which has only one output channel. We use a sigmoid activation function for the mask. We visualize the two-stream architecture in Figure 1. In our experiments, the generator produces 64× 64 videos for 32 frames, which is a little over a second. 2.3 Discriminator Network The discriminator needs to be able to solve two problems: firstly, it must be able to classify realistic scenes from synthetically generated scenes, and secondly, it must be able to recognize realistic motion between frames. We chose to design the discriminator to be able to solve both of these tasks with the same model. We use a five-layer spatio-temporal convolutional network with kernels 4× 4× 4 so that the hidden layers can learn both visual models and motion models. We design the architecture to be reverse of the foreground stream in the generator, replacing fractionally strided convolutions with strided convolutions (to down-sample instead of up-sample), and replacing the last layer to output a binary classification (real or not). 2.4 Learning and Implementation We train the generator and discriminator with stochastic gradient descent. We alternate between maximizing the loss w.r.t. wD and minimizing the loss w.r.t. wG until a fixed number of iterations. All networks are trained from scratch. Our implementation is based off a modified version of [31] in Torch7. We used a more numerically stable implementation of cross entropy loss to prevent overflow. We use the Adam [16] optimizer and a fixed learning rate of 0.0002 and momentum term of 0.5. The latent code has 100 dimensions, which we sample from a normal distribution. We use a batch size of 64. We initialize all weights with zero mean Gaussian noise with standard deviation 0.01. We normalize all videos to be in the range [−1, 1]. We use batch normalization [11] followed by the ReLU activation functions after every layer in the generator, except the output layers, which uses tanh. Following [31], we also use batch normalization in the discriminator except for the first layer and we instead use leaky ReLU [48]. Training typically took several days on a GPU. 3 Experiments We experiment with the generative adversarial network for video (VGAN) on both generation and recognition tasks. We also show several qualitative examples online. 3.1 Unlabeled Video Dataset We use a large amount of unlabeled video to train our model. We downloaded over two million videos from Flickr [39] by querying for popular Flickr tags as well as querying for common English words. From this pool, we created two datasets: Unfiltered Unlabeled Videos: We use these videos directly, without any filtering, for representation learning. The dataset is over 5, 000 hours. Filtered Unlabeled Videos: To evaluate generations, we use the Places2 pre-trained model [53] to automatically filter the videos by scene category. Since image/video generation is a challenging problem, we assembled this dataset to better diagnose strengths and weaknesses of approaches. We experimented with four scene categories: golf course, hospital rooms (babies), beaches, and train station. 
Stabilization: As we are interested in the movement of objects and not camera shake, we stabilize the camera motion for both datasets. We extract SIFT keypoints [21], use RANSAC to estimate a homography (rotation, translation, scale) between adjacent frames, and warp frames to minimize background motion. When the homography moved out of the frame, we fill in the missing values using the previous frames. If the homography has too large of a re-projection error, we ignore that segment of the video for training, which only happened 3% of the time. The only other pre-processing we do is normalizing the videos to be in the range [−1, 1]. We extract frames at native frame rate (25 fps). We use 32-frame videos of spatial resolution 64× 64. 3.2 Video Generation We evaluate both the one-stream and two-stream generator. We trained a generator for each scene category in our filtered dataset. We perform both a qualitative evaluation as well as a quantitative psychophysical evaluation to measure the perceptual quality of the generated videos. Qualitative Results: We show several examples of the videos generated from our model in Figure 2. We observe that a) the generated scenes tend to be fairly sharp and that b) the motion patterns are generally correct for their respective scene. For example, the beach model tends to produce beaches with crashing waves, the golf model produces people walking on grass, and the train station generations usually show train tracks and a train with windows rapidly moving along it. While the model usually learns to put motion on the right objects, one common failure mode is that the objects lack resolution. For example, the people in the beaches and golf courses are often blobs. Nevertheless, we believe it is promising that our model can generate short motions. We visualize the behavior of the two-stream architecture in Figure 3. Baseline: Since to our knowledge there are no existing large-scale generative models of video ([33] requires an input frame), we develop a simple but reasonable baseline for this task. We train an autoencoder over our data. The encoder is similar to the discriminator network (except producing 100 dimensional code), while the decoder follows the two-stream generator network. Hence, the baseline autoencoder network has a similar number of parameters as our full approach. We then feed examples through the encoder and fit a Gaussian Mixture Model (GMM) with 256 components over the 100 dimensional hidden space. To generate a novel video, we sample from this GMM, and feed the sample through the decoder. Evaluation Metric: We quantitatively evaluate our generation using a psychophysical two-alternative forced choice with workers on Amazon Mechanical Turk. We show a worker two random videos, and ask them “Which video is more realistic?” We collected over 13, 000 opinions across 150 unique workers. We paid workers one cent per comparison, and required workers to historically have a 95% approval rating on MTurk. We experimented with removing bad workers that frequently said real videos were not realistic, but the relative rankings did not change. We designed this experiment following advice from [38], which advocates evaluating generative models for the task at hand. In our case, we are interested in perceptual quality of motion. We consider a model X better than model Y if workers prefer generations from X more than generations from Y. 
Quantitative Results: Table 1 shows the percentage of times that workers preferred generations from one model over another. Workers consistently prefer videos from the generative adversarial network more than an autoencoder. Additionally, workers show a slight preference for the two-stream architecture, especially in scenes where the background is large (e.g., golf course, beach). Although the one-stream architecture is capable of generating stationary backgrounds, it may be difficult to find this solution, motivating a more explicit architecture. The one-stream architecture generally produces high-frequency temporal flickering in the background. To evaluate whether static frames are better than our generations, we also ask workers to choose between our videos and a static frame, and workers only chose the static frame 38% of the time, suggesting our model produces more realistic motion than static frames on average. Finally, while workers generally can distinguish real videos from generated videos, the workers show the most confusion with our two-stream model compared to baselines, suggesting the two-stream generations may be more realistic on average. 3.3 Video Representation Learning We also experimented with using our model as a way to learn unsupervised representations for video. We train our two-stream model with over 5, 000 hours of unfiltered, unlabeled videos from Flickr. We then fine-tune the discriminator on the task of interest (e.g., action recognition) using a relatively small set of labeled video. To do this, we replace the last layer (which is a binary classifier) with a K-way softmax classifier. We also add dropout [36] to the penultimate layer to reduce overfitting. Action Classification: We evaluated performance on classifying actions on UCF101 [35]. We report accuracy in Figure 4a. Initializing the network with the weights learned from the generative adversarial network outperforms a randomly initialized network, suggesting that it has learned an useful internal representation for video. Interestingly, while a randomly initialized network under-performs hand-crafted STIP features [35], the network initialized with our model significantly outperforms it. We also experimented with training a logistic regression on only the last layer, which performed worse. Finally, our model slightly outperforms another recent unsupervised video representation learning approach [24]. However, our approach uses an order of magnitude fewer parameters, less layers (5 layers vs 8 layers), and low-resolution video. Performance vs Data: We also experimented with varying the amount of labeled training data available to our fine-tuned network. Figure 4b reports performance versus the amount of labeled training data available. As expected, performance increases with more labeled data. The fine-tuned model shows an advantage in low data regimes: even with one eighth of the labeled data, the finetuned model still beats a randomly initialized network. Moreover, Figure 4c plots the relative accuracy gain over the fine-tuned model and the random initialization (fine-tuned performance divided by random initialized performance). This shows that fine-tuning with our model has larger relative gain over random initialization in cases with less labeled data, showing its utility in low-data regimes. 3.4 Future Generation We investigate whether our approach can be used to generate the future of a static image. Specifically, given a static image x0, can we extrapolate a video of possible consequent frames? 
Encoder: We utilize the same model as our two-stream model; however, we must make one change in order to input the static image instead of the latent code. We can do this by attaching a five-layer convolutional network to the front of the generator which encodes the image into the latent space, similar to a conditional generative adversarial network [23]. The rest of the generator and discriminator networks remain the same. However, we add an additional loss term that penalizes the distance between the input and the first frame of the generated video. We do this so that the generator creates videos consistent with the input image. We train from scratch with the objective:
$$\min_{w_G} \max_{w_D} \; \mathbb{E}_{x \sim p_x(x)}\left[\log D(x; w_D)\right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)}\left[\log\left(1 - D(G(x_0; w_G); w_D)\right)\right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)}\left[\lambda \lVert x_0 - G_0(x_0; w_G)\rVert_2^2\right] \qquad (3)$$
where x0 is the first frame of the input, G0(·) is the first frame of the generated video, and λ ∈ R is a hyperparameter. The discriminator will try to classify realistic frames and realistic motions as before, while the generator will try to produce a realistic video such that the first frame is reconstructed well. Results: We qualitatively show a few examples of our approach in Figure 5 using held-out testing videos. Although the extrapolations are rarely correct, they often have fairly plausible motions. The most common failure is that the generated video has a scene similar but not identical to the input image, such as by changing colors or dropping/hallucinating objects. The former could be solved by a color histogram normalization in post-processing (which we did not do for simplicity), while we suspect the latter will require building more powerful generative models. The generated videos are usually not the correct video, but we observe that often the motions are plausible. We are not aware of an existing approach that can directly generate multi-frame videos from a single static image. [33, 22] can generate video, but they require multiple input frames and empirically become blurry after extrapolating many frames. [43, 50] can predict optic flow from a single image, but they do not generate several frames of motion and may be susceptible to warping artifacts. We believe this experiment shows an important application of generative video models. Visualizing Representation: Since generating the future requires understanding how objects move, the network may need to learn to recognize some objects internally, even though it is not supervised to do so. Figure 6 visualizes some activations of hidden units in the third convolutional layer. While not all units are semantic, some of the units tend to be selective for objects that are sources of motion, such as people or train tracks. These visualizations suggest that scaling up future generation might be a promising supervisory signal for object recognition and complementary to [27, 5, 46]. Conclusion: Understanding scene dynamics will be crucial for the next generation of computer vision systems. In this work, we explored how to learn some dynamics from large amounts of unlabeled video by capitalizing on adversarial learning methods. Since annotating dynamics is expensive, we believe learning from unlabeled data is a promising direction. While we are still a long way from fully harnessing the potential of unlabeled video, our experiments support that abundant unlabeled video can be lucrative for both learning to generate videos and learning visual representations. Acknowledgements: We thank Yusuf Aytar for dataset discussions.
We thank MIT TIG, especially Garrett Wollman, for troubleshooting issues on storing the 26 TB of video. We are grateful for the Torch7 community for answering many questions. NVidia donated GPUs used for this research. This work was supported by NSF grant #1524817 to AT, START program at UMBC to HP, and the Google PhD fellowship to CV.
1. What is the focus of the paper regarding video generative models? 2. What are the strengths of the proposed approach, particularly in terms of architecture and modeling? 3. What are the weaknesses of the paper, especially regarding the representation of video contents? 4. Do you have any concerns about the generated results, and how do they compare to realistic videos? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review This paper proposes a generative model for tiny videos via a GAN. The problem is challenging due to the high dimensionality of video. The authors propose a two-stream architecture that models the foreground and background separately. Motion is modeled through 3D convolutional layers, a direct extension of GAN-based image generation approaches [30, 23]. The results are visually pleasing in terms of appearance and motion. Moreover, this work also explores unsupervised video representation learning through the trained discriminator. The paper is novel in exploring a large-scale video generative model. It is generally well written and easy to understand. The main concern is that the proposed model is too generic: mid- to high-level structured content of a video is not well modeled, even though a separate model is trained for each scene category. For the video generation problem, understanding the content of both frames and sequences (e.g., objects, actions) is essential. However, with the proposed architecture the discriminator can hardly capture any mid- to high-level visual information, since no such regularization is enforced. The results show numerous unrealistic object shapes and irregular motions that are far from realistic. This also applies to the results for animating images. Although the problem is interesting, some results leave much room for improvement before the work can be presented (with convincing results) at NIPS.
NIPS
Title Generating Videos with Scene Dynamics Abstract We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation. 1 Introduction Understanding object motions and scene dynamics is a core problem in computer vision. For both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction), a model of how scenes transform is needed. However, creating a model of dynamics is challenging because there is a vast number of ways that objects and scenes can change. In this work, we are interested in the fundamental problem of learning how scenes transform with time. We believe investigating this question may yield insight into the design of predictive models for computer vision. However, since annotating this knowledge is both expensive and ambiguous, we instead seek to learn it directly from large amounts of in-the-wild, unlabeled video. Unlabeled video has the advantage that it can be economically acquired at massive scales yet contains rich temporal signals “for free” because frames are temporally coherent. With the goal of capturing some of the temporal knowledge contained in large amounts of unlabeled video, we present an approach that learns to generate tiny videos which have fairly realistic dynamics and motions. To do this, we capitalize on recent advances in generative adversarial networks [9, 31, 4], which we extend to video. We introduce a two-stream generative model that explicitly models the foreground separately from the background, which allows us to enforce that the background is stationary, helping the network to learn which objects move and which do not. Our experiments suggest that our model has started to learn about dynamics. In our generation experiments, we show that our model can generate scenes with plausible motions.1 We conducted a psychophysical study where we asked over a hundred people to compare generated videos, and people preferred videos from our full model more often. Furthermore, by making the model conditional on an input image, our model can sometimes predict a plausible (but “incorrect”) future. In our recognition experiments, we show how our model has learned, without supervision, useful features for human action classification. Moreover, visualizations of the learned representation suggest future generation may be a promising supervisory signal for learning to recognize objects of motion. 1See http://mit.edu/vondrick/tinyvideo for the animated videos. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. The primary contribution of this paper is showing how to leverage large amounts of unlabeled video in order to acquire priors about scene dynamics. 
The secondary contribution is the development of a generative model for video. The remainder of this paper describes these contributions in detail. In section 2, we describe our generative model for video. In section 3, we present several experiments to analyze the generative model. We believe that generative video models can impact many applications, such as in simulations, forecasting, and representation learning. 1.1 Related Work This paper builds upon early work in generative video models [29]. However, previous work has focused mostly on small patches, and evaluated it for video clustering. Here, we develop a generative video model for natural scenes using state-of-the-art adversarial learning methods [9, 31]. Conceptually, our work is related to studies into fundamental roles of time in computer vision [30, 12, 2, 7, 24]. However, here we are interested in generating short videos with realistic temporal semantics, rather than detecting or retrieving them. Our technical approach builds on recent work in generative adversarial networks for image modeling [9, 31, 4, 47, 28], which we extend to video. To our knowledge, there has been relatively little work extensively studying generative adversarial networks for video. Most notably, [22] also uses adversarial networks for video frame prediction. Our framework can generate videos for longer time scales and learn representations of video using unlabeled data. Our work is also related to efforts to predict the future in video [33, 22, 43, 50, 42, 17, 8, 54] as well as concurrent work in future generation [6, 15, 20, 49, 55]. Often these works may be viewed as a generative model conditioned on the past frames. Our work complements these efforts in two ways. Firstly, we explore how to generate videos from scratch (not conditioned on the past). Secondly, while prior work has used generative models in video settings mostly on a single frame, we jointly generate a sequence of frames (32 frames) using spatio-temporal convolutional networks, which may help prevent drifts due to errors accumulating. We leverage approaches for recognizing actions in video with deep networks, but apply them for video generation instead. We use spatio-temporal 3D convolutions to model videos [40], but we use fractionally strided convolutions [51] instead because we are interested in generation. We also use two-streams to model video [34], but apply them for video generation instead of action recognition. However, our approach does not explicitly use optical flow; instead, we expect the network to learn motion features on its own. Finally, this paper is related to a growing body of work that capitalizes on large amounts of unlabeled video for visual recognition tasks [18, 46, 37, 13, 24, 25, 3, 32, 26, 27, 19, 41, 42, 1]. We instead leverage large amounts of unlabeled video for generation. 2 Generative Models for Video In this section, we present a generative model for videos. We propose to use generative adversarial networks [9], which have been shown to have good performance on image generation [31, 4]. 2.1 Review: Generative Adversarial Networks The main idea behind generative adversarial networks [9] is to train two networks: a generator network G tries to produce a video, and a discriminator network D tries to distinguish between “real“ videos and “fake” generated videos. 
One can train these networks against each other in a min-max game where the generator seeks to maximally fool the discriminator while simultaneously the discriminator seeks to detect which examples are fake:
$$\min_{w_G} \max_{w_D} \; \mathbb{E}_{x \sim p_x(x)}\left[\log D(x; w_D)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z; w_G); w_D)\right)\right] \qquad (1)$$
where z is a latent “code” that is often sampled from a simple distribution (such as a normal distribution) and x ∼ px(x) samples from the data distribution. In practice, since we do not know the true distribution of data px(x), we estimate the expectation by drawing from our dataset. Since we will optimize Equation 1 with gradient-based methods (SGD), the two networks G and D can take on any form appropriate for the task as long as they are differentiable with respect to parameters wG and wD. We design a G and D for video. 2.2 Generator Network The input to the generator network is a low-dimensional latent code z ∈ Rd. In most cases, this code can be sampled from a distribution (e.g., Gaussian). Given a code z, we wish to produce a video. We design the architecture of the generator network with a few principles in mind. Firstly, we want the network to be invariant to translations in both space and time. Secondly, we want a low-dimensional z to be able to produce a high-dimensional output (video). Thirdly, we want to assume a stationary camera and take advantage of the property that usually only objects move. We are interested in modeling object motion, and not the motion of cameras. Moreover, since modeling that the background is stationary is important in video recognition tasks [44], it may be helpful in video generation as well. We explore two different network architectures: One Stream Architecture: We combine spatio-temporal convolutions [14, 40] with fractionally strided convolutions [51, 31] to generate video. Three-dimensional convolutions provide spatial and temporal invariance, while fractionally strided convolutions can upsample efficiently in a deep network, allowing z to be low-dimensional. We use an architecture inspired by [31], except extended in time. We use a five-layer network of 4 × 4 × 4 convolutions with a stride of 2, except for the first layer which uses 2 × 4 × 4 convolutions (time × width × height). We found that these kernel sizes provided an appropriate balance between training speed and quality of generations. Two Stream Architecture: The one-stream architecture does not model that the world is stationary and usually only objects move. We experimented with making this behavior explicit in the model. We use an architecture that enforces a static background and moving foreground. We use a two-stream architecture where the generator is governed by the combination:
$$G_2(z) = m(z) \odot f(z) + (1 - m(z)) \odot b(z). \qquad (2)$$
Our intention is that 0 ≤ m(z) ≤ 1 can be viewed as a spatio-temporal mask that selects either the foreground f(z) model or the background model b(z) for each pixel location and timestep. To enforce a background model in the generations, b(z) produces a spatial image that is replicated over time, while f(z) produces a spatio-temporal cuboid masked by m(z). By summing the masked foreground model with the masked background model, we obtain the final generation. Note that ⊙ is element-wise multiplication, and we replicate singleton dimensions to match the corresponding tensor. During learning, we also add to the objective a small sparsity prior on the mask, λ‖m(z)‖1 with λ = 0.1, which we found helps encourage the network to use the background stream.
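As a concrete illustration of Equation 2, below is a minimal PyTorch-style sketch of the two-stream composition, assuming the foreground, mask, and background streams are given as modules. The stand-in streams in the demo are simple linear layers, not the paper's fractionally strided convolutional networks, and all names are ours.

```python
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    """Combine foreground, mask, and background streams as in Eq. (2)."""

    def __init__(self, foreground, mask_head, background, sparsity_weight=0.1):
        super().__init__()
        self.foreground = foreground    # z -> (B, 3, T, H, W) video cuboid
        self.mask_head = mask_head      # z -> (B, 1, T, H, W) pre-sigmoid mask
        self.background = background    # z -> (B, 3, H, W) static image
        self.sparsity_weight = sparsity_weight

    def forward(self, z):
        f = torch.tanh(self.foreground(z))
        b = torch.tanh(self.background(z))
        m = torch.sigmoid(self.mask_head(z))                    # values in [0, 1]
        b = b.unsqueeze(2).expand(-1, -1, f.shape[2], -1, -1)   # replicate over time
        video = m * f + (1.0 - m) * b                           # element-wise Eq. (2)
        sparsity = self.sparsity_weight * m.abs().mean()        # l1 prior on the mask
        return video, sparsity

if __name__ == "__main__":
    # Tiny demo dimensions; the paper generates 32-frame 64x64 videos.
    d, T, H, W = 100, 8, 16, 16
    fg = nn.Sequential(nn.Linear(d, 3 * T * H * W), nn.Unflatten(1, (3, T, H, W)))
    mk = nn.Sequential(nn.Linear(d, 1 * T * H * W), nn.Unflatten(1, (1, T, H, W)))
    bg = nn.Sequential(nn.Linear(d, 3 * H * W), nn.Unflatten(1, (3, H, W)))
    gen = TwoStreamGenerator(fg, mk, bg)
    video, sparsity_loss = gen(torch.randn(2, d))
    print(video.shape)  # torch.Size([2, 3, 8, 16, 16])
```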
We use fractionally strided convolutional networks for m(z), f(z), and b(z). For f(z), we use the same network as the one-stream architecture, and for b(z) we use a similar generator architecture to [31]. We only use their architecture; we do not initialize with their learned weights. To create the mask m(z), we use a network that shares weights with f(z) except the last layer, which has only one output channel. We use a sigmoid activation function for the mask. We visualize the two-stream architecture in Figure 1. In our experiments, the generator produces 64× 64 videos for 32 frames, which is a little over a second. 2.3 Discriminator Network The discriminator needs to be able to solve two problems: firstly, it must be able to classify realistic scenes from synthetically generated scenes, and secondly, it must be able to recognize realistic motion between frames. We chose to design the discriminator to be able to solve both of these tasks with the same model. We use a five-layer spatio-temporal convolutional network with kernels 4× 4× 4 so that the hidden layers can learn both visual models and motion models. We design the architecture to be reverse of the foreground stream in the generator, replacing fractionally strided convolutions with strided convolutions (to down-sample instead of up-sample), and replacing the last layer to output a binary classification (real or not). 2.4 Learning and Implementation We train the generator and discriminator with stochastic gradient descent. We alternate between maximizing the loss w.r.t. wD and minimizing the loss w.r.t. wG until a fixed number of iterations. All networks are trained from scratch. Our implementation is based off a modified version of [31] in Torch7. We used a more numerically stable implementation of cross entropy loss to prevent overflow. We use the Adam [16] optimizer and a fixed learning rate of 0.0002 and momentum term of 0.5. The latent code has 100 dimensions, which we sample from a normal distribution. We use a batch size of 64. We initialize all weights with zero mean Gaussian noise with standard deviation 0.01. We normalize all videos to be in the range [−1, 1]. We use batch normalization [11] followed by the ReLU activation functions after every layer in the generator, except the output layers, which uses tanh. Following [31], we also use batch normalization in the discriminator except for the first layer and we instead use leaky ReLU [48]. Training typically took several days on a GPU. 3 Experiments We experiment with the generative adversarial network for video (VGAN) on both generation and recognition tasks. We also show several qualitative examples online. 3.1 Unlabeled Video Dataset We use a large amount of unlabeled video to train our model. We downloaded over two million videos from Flickr [39] by querying for popular Flickr tags as well as querying for common English words. From this pool, we created two datasets: Unfiltered Unlabeled Videos: We use these videos directly, without any filtering, for representation learning. The dataset is over 5, 000 hours. Filtered Unlabeled Videos: To evaluate generations, we use the Places2 pre-trained model [53] to automatically filter the videos by scene category. Since image/video generation is a challenging problem, we assembled this dataset to better diagnose strengths and weaknesses of approaches. We experimented with four scene categories: golf course, hospital rooms (babies), beaches, and train station. 
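To complement the training details in Section 2.4 above, here is a simplified, hedged sketch of one alternating update with the quoted Adam settings. It assumes a generator mapping codes to videos and a discriminator producing one real/fake logit per clip, and it uses a numerically stable logits-based cross entropy in the non-saturating form common in practice; it omits details such as batch normalization placement and the two-stream specifics.

```python
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, opt_g, opt_d, real_videos, z_dim=100):
    """One alternating update: ascend the loss in the discriminator, then
    descend it in the generator (assumes D outputs a logit of shape (batch,))."""
    bsz = real_videos.size(0)
    device = real_videos.device
    ones = torch.ones(bsz, device=device)
    zeros = torch.zeros(bsz, device=device)

    # Discriminator update: real clips labeled 1, generated clips labeled 0.
    fake = generator(torch.randn(bsz, z_dim, device=device)).detach()
    d_loss = (F.binary_cross_entropy_with_logits(discriminator(real_videos), ones)
              + F.binary_cross_entropy_with_logits(discriminator(fake), zeros))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make generated clips be classified as real.
    g_loss = F.binary_cross_entropy_with_logits(
        discriminator(generator(torch.randn(bsz, z_dim, device=device))), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Optimizer settings quoted in the paper (learning rate 0.0002, momentum term 0.5):
# opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
# opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
```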
Stabilization: As we are interested in the movement of objects and not camera shake, we stabilize the camera motion for both datasets. We extract SIFT keypoints [21], use RANSAC to estimate a homography (rotation, translation, scale) between adjacent frames, and warp frames to minimize background motion. When the homography moved out of the frame, we fill in the missing values using the previous frames. If the homography has too large of a re-projection error, we ignore that segment of the video for training, which only happened 3% of the time. The only other pre-processing we do is normalizing the videos to be in the range [−1, 1]. We extract frames at native frame rate (25 fps). We use 32-frame videos of spatial resolution 64× 64. 3.2 Video Generation We evaluate both the one-stream and two-stream generator. We trained a generator for each scene category in our filtered dataset. We perform both a qualitative evaluation as well as a quantitative psychophysical evaluation to measure the perceptual quality of the generated videos. Qualitative Results: We show several examples of the videos generated from our model in Figure 2. We observe that a) the generated scenes tend to be fairly sharp and that b) the motion patterns are generally correct for their respective scene. For example, the beach model tends to produce beaches with crashing waves, the golf model produces people walking on grass, and the train station generations usually show train tracks and a train with windows rapidly moving along it. While the model usually learns to put motion on the right objects, one common failure mode is that the objects lack resolution. For example, the people in the beaches and golf courses are often blobs. Nevertheless, we believe it is promising that our model can generate short motions. We visualize the behavior of the two-stream architecture in Figure 3. Baseline: Since to our knowledge there are no existing large-scale generative models of video ([33] requires an input frame), we develop a simple but reasonable baseline for this task. We train an autoencoder over our data. The encoder is similar to the discriminator network (except producing 100 dimensional code), while the decoder follows the two-stream generator network. Hence, the baseline autoencoder network has a similar number of parameters as our full approach. We then feed examples through the encoder and fit a Gaussian Mixture Model (GMM) with 256 components over the 100 dimensional hidden space. To generate a novel video, we sample from this GMM, and feed the sample through the decoder. Evaluation Metric: We quantitatively evaluate our generation using a psychophysical two-alternative forced choice with workers on Amazon Mechanical Turk. We show a worker two random videos, and ask them “Which video is more realistic?” We collected over 13, 000 opinions across 150 unique workers. We paid workers one cent per comparison, and required workers to historically have a 95% approval rating on MTurk. We experimented with removing bad workers that frequently said real videos were not realistic, but the relative rankings did not change. We designed this experiment following advice from [38], which advocates evaluating generative models for the task at hand. In our case, we are interested in perceptual quality of motion. We consider a model X better than model Y if workers prefer generations from X more than generations from Y. 
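To make the autoencoder + GMM baseline described above concrete, here is a small scikit-learn sketch of the sampling procedure. The encoder outputs and the decoder are stand-ins, and the diagonal covariance is our assumption (the paper does not specify the GMM covariance type).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_latent_gmm(latent_codes, n_components=256, seed=0):
    """Fit a GMM over the 100-d autoencoder codes, as in the baseline."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    gmm.fit(latent_codes)
    return gmm

def sample_baseline_videos(gmm, decode_fn, n_videos=8):
    """Sample codes from the GMM and decode them into videos."""
    z, _ = gmm.sample(n_videos)
    return decode_fn(z.astype(np.float32))

if __name__ == "__main__":
    codes = np.random.randn(5000, 100).astype(np.float32)   # stand-in encoder outputs
    gmm = fit_latent_gmm(codes, n_components=16)             # fewer components for the demo
    videos = sample_baseline_videos(gmm, decode_fn=lambda z: z)  # identity "decoder" stand-in
    print(videos.shape)
```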
Quantitative Results: Table 1 shows the percentage of times that workers preferred generations from one model over another. Workers consistently prefer videos from the generative adversarial network over those from the autoencoder. Additionally, workers show a slight preference for the two-stream architecture, especially in scenes where the background is large (e.g., golf course, beach). Although the one-stream architecture is capable of generating stationary backgrounds, it may be difficult to find this solution, motivating a more explicit architecture. The one-stream architecture generally produces high-frequency temporal flickering in the background. To evaluate whether static frames are better than our generations, we also ask workers to choose between our videos and a static frame, and workers only chose the static frame 38% of the time, suggesting our model produces more realistic motion than static frames on average. Finally, while workers generally can distinguish real videos from generated videos, the workers show the most confusion with our two-stream model compared to baselines, suggesting the two-stream generations may be more realistic on average. 3.3 Video Representation Learning We also experimented with using our model as a way to learn unsupervised representations for video. We train our two-stream model with over 5,000 hours of unfiltered, unlabeled videos from Flickr. We then fine-tune the discriminator on the task of interest (e.g., action recognition) using a relatively small set of labeled video. To do this, we replace the last layer (which is a binary classifier) with a K-way softmax classifier. We also add dropout [36] to the penultimate layer to reduce overfitting. Action Classification: We evaluated performance on classifying actions on UCF101 [35]. We report accuracy in Figure 4a. Initializing the network with the weights learned from the generative adversarial network outperforms a randomly initialized network, suggesting that it has learned a useful internal representation for video. Interestingly, while a randomly initialized network under-performs hand-crafted STIP features [35], the network initialized with our model significantly outperforms them. We also experimented with training a logistic regression on only the last layer, which performed worse. Finally, our model slightly outperforms another recent unsupervised video representation learning approach [24]. However, our approach uses an order of magnitude fewer parameters, fewer layers (5 layers vs. 8 layers), and low-resolution video. Performance vs Data: We also experimented with varying the amount of labeled training data available to our fine-tuned network. Figure 4b reports performance versus the amount of labeled training data available. As expected, performance increases with more labeled data. The fine-tuned model shows an advantage in low-data regimes: even with one eighth of the labeled data, the fine-tuned model still beats a randomly initialized network. Moreover, Figure 4c plots the relative accuracy gain of the fine-tuned model over the random initialization (fine-tuned performance divided by randomly initialized performance). This shows that fine-tuning with our model has a larger relative gain over random initialization in cases with less labeled data, demonstrating its utility in low-data regimes. 3.4 Future Generation We investigate whether our approach can be used to generate the future of a static image. Specifically, given a static image x0, can we extrapolate a video of possible consequent frames?
Encoder: We utilize the same model as our two-stream model; however, we must make one change in order to input the static image instead of the latent code. We can do this by attaching a five-layer convolutional network to the front of the generator which encodes the image into the latent space, similar to a conditional generative adversarial network [23]. The rest of the generator and discriminator networks remain the same. However, we add an additional loss term that penalizes the distance between the input and the first frame of the generated video. We do this so that the generator creates videos consistent with the input image. We train from scratch with the objective:
$$\min_{w_G} \max_{w_D} \; \mathbb{E}_{x \sim p_x(x)}\left[\log D(x; w_D)\right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)}\left[\log\left(1 - D(G(x_0; w_G); w_D)\right)\right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)}\left[\lambda \lVert x_0 - G_0(x_0; w_G)\rVert_2^2\right] \qquad (3)$$
where x0 is the first frame of the input, G0(·) is the first frame of the generated video, and λ ∈ R is a hyperparameter. The discriminator will try to classify realistic frames and realistic motions as before, while the generator will try to produce a realistic video such that the first frame is reconstructed well. Results: We qualitatively show a few examples of our approach in Figure 5 using held-out testing videos. Although the extrapolations are rarely correct, they often have fairly plausible motions. The most common failure is that the generated video has a scene similar but not identical to the input image, such as by changing colors or dropping/hallucinating objects. The former could be solved by a color histogram normalization in post-processing (which we did not do for simplicity), while we suspect the latter will require building more powerful generative models. The generated videos are usually not the correct video, but we observe that often the motions are plausible. We are not aware of an existing approach that can directly generate multi-frame videos from a single static image. [33, 22] can generate video, but they require multiple input frames and empirically become blurry after extrapolating many frames. [43, 50] can predict optic flow from a single image, but they do not generate several frames of motion and may be susceptible to warping artifacts. We believe this experiment shows an important application of generative video models. Visualizing Representation: Since generating the future requires understanding how objects move, the network may need to learn to recognize some objects internally, even though it is not supervised to do so. Figure 6 visualizes some activations of hidden units in the third convolutional layer. While not all units are semantic, some of the units tend to be selective for objects that are sources of motion, such as people or train tracks. These visualizations suggest that scaling up future generation might be a promising supervisory signal for object recognition and complementary to [27, 5, 46]. Conclusion: Understanding scene dynamics will be crucial for the next generation of computer vision systems. In this work, we explored how to learn some dynamics from large amounts of unlabeled video by capitalizing on adversarial learning methods. Since annotating dynamics is expensive, we believe learning from unlabeled data is a promising direction. While we are still a long way from fully harnessing the potential of unlabeled video, our experiments support that abundant unlabeled video can be lucrative for both learning to generate videos and learning visual representations. Acknowledgements: We thank Yusuf Aytar for dataset discussions.
We thank MIT TIG, especially Garrett Wollman, for troubleshooting issues on storing the 26 TB of video. We are grateful for the Torch7 community for answering many questions. NVidia donated GPUs used for this research. This work was supported by NSF grant #1524817 to AT, START program at UMBC to HP, and the Google PhD fellowship to CV.
1. What is the main contribution of the paper regarding video generation? 2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to previous works? 3. How effective is the training data generation method in stabilizing images and ignoring unreliable samples? 4. Can you explain the three types of experiments conducted in the paper, including video generation, pre-training for action classification, and video conditioned on the first frame? 5. Why are larger resolutions and more diverse scene examples necessary for better evaluating the effectiveness of the method?
Review
Review The paper aims at generating short (32 frames) video segments with resolution 64x64. The proposed method relies on Generative Adversarial Network approach. There are 2 proposed architectures: straight-forward one and the one that generates static background and dynamic foreground separately. The training data is generated in a way that tries to "stabilize" images by compensating camera motion. This results in ignoring samples for which it can't be done reliably. There are 3 types of experiments: 1. video generation. Samples, generated from different architectures and self-implemented baseline, are compared with real videos by asking Amazon Mechanical Turk assessors to determine which of the two videos is real. Assessors prefer generated videos to real ones in 18% of the cases on average. 2. pre-training for action classification. The algorithm does not outperform current state-of-the-art Temporal Ordering. 3. generation of video conditioned on the first frame. Experiments demonstrating large motion of large objects at larger resolution would be very important for understanding if the method works reasonably. Many scenes shown in the results either almost do not move (like hospital 1,3,5,6,7,8,9,11,12,13,14,15) or provide strange-looking movements (like oscillating face in hospital 2,10 and many train videos). Larger resolution would certainly be helpful as assessors would be able to better see what is actually happening in the video. At the current resolution it is very hard to see what exactly is happening to moving objects. Most of the sea videos just show flickering of the water, how natural it looks is also only possible to say for larger resolution.
NIPS
Title Lower Bounds on Adversarial Robustness from Optimal Transport Abstract While progress has been made in understanding the robustness of machine learning classifiers to test-time adversaries (evasion attacks), fundamental questions remain unresolved. In this paper, we use optimal transport to characterize the minimum possible loss in an adversarial classification scenario. In this setting, an adversary receives a random labeled example from one of two classes, perturbs the example subject to a neighborhood constraint, and presents the modified example to the classifier. We define an appropriate cost function such that the minimum transportation cost between the distributions of the two classes determines the minimum 0− 1 loss for any classifier. When the classifier comes from a restricted hypothesis class, the optimal transportation cost provides a lower bound. We apply our framework to the case of Gaussian data with norm-bounded adversaries and explicitly show matching bounds for the classification and transport problems as well as the optimality of linear classifiers. We also characterize the sample complexity of learning in this setting, deriving and extending previously known results as a special case. Finally, we use our framework to study the gap between the optimal classification performance possible and that currently achieved by state-of-the-art robustly trained neural networks for datasets of interest, namely, MNIST, Fashion MNIST and CIFAR-10. 1 Introduction Machine learning (ML) has become ubiquitous due to its impressive performance in a wide variety of domains such as image recognition [48,72], natural language and speech processing [22,25,37], gameplaying [12,59,71] and aircraft collision avoidance [42]. This ubiquity, however, provides adversaries with both the opportunity and incentive to strategically fool machine learning systems during both the training (poisoning attacks) [5, 9, 40, 60, 67] and test (evasion attacks) [8, 17, 34, 57, 58, 63, 77] phases. In an evasion attack, an adversary adds imperceptible perturbations to inputs in the test phase to cause misclassification. A large number of adversarial example-based evasion attacks have been proposed against ML algorithms used for tasks such as image classification [8, 17, 19, 34, 63, 77], object detection [21, 53, 83], image segmentation [2, 31] and speech recognition [18, 86]; generative ∗Equal contribution. †Work done while at Princeton University 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. models for image data [45] and even reinforcement learning algorithms [38, 46]. These attacks have been carried out in black-box [7, 11, 20, 52, 61, 62, 77] as well as in physical settings [29, 49, 70, 74]. A wide variety of defenses based on adversarial training [34, 54, 78], input de-noising through transformations [6, 24, 28, 69, 84], distillation [65], ensembling [1, 4, 75] and feature nullification [81] were proposed to defend ML algorithms against evasion attacks, only for most to be rendered ineffective by stronger attacks [3, 14–16]. Iterative adversarial training [54] is a current state-of-theart empirical defense. Recently, defenses that rely on adversarial training and are provably robust to small perturbations have been proposed [35, 44, 66, 73] but are unable to achieve good generalization behavior on standard datasets such as CIFAR-10 [47]. 
In spite of an active line of research that has worked to characterize the difficulty of learning in the presence of evasion adversaries by analyzing the sample complexity of learning classifiers for known distributions [68] as well as in the distributionfree setting [23, 56, 85], fundamental questions remain unresolved. One such question is, what is the behavior of the optimal achievable loss in the presence of an adversary? In this paper, we derive bounds on the 0−1 loss of classifiers while classifying adversarially modified data at test time, which is often referred to as adversarial robustness. We first develop a framework that relates classification in the presence of an adversary and optimal transport with an appropriately defined adversarial cost function. For an arbitrary data distribution with two classes, we characterize optimal adversarial robustness in terms of the transportation distance between the classes. When the classifier comes from a restricted hypothesis class, we obtain a lower bound on the minimum possible 0− 1 loss (or equivalently, an upper bound on the maximum possible classification accuracy). We then consider the case of a mixture of two Gaussians and derive matching upper and lower bounds for adversarial robustness by framing it as a convex optimization problem and proving the optimality of linear classifiers. For an `∞ adversary, we also present the explicit solution for this optimization problem and analyze its properties. Further, we derive an expression for sample complexity with the assumption of a Gaussian prior on the mean of the Gaussians which allows us to independently match and extend the results from Schmidt et al. [68] as a special case. Finally, in our experiments, we find transportation costs between the classes of empirical distributions of interest such as MNIST [50], Fashion-MNIST [82] and CIFAR-10 [47] for adversaries bounded by `2 and `∞ distance constraints, and relate them to the classification loss of state-of-the-art robust classifiers. Our results demonstrate that as the adversarial budget increases, the gap between current robust classifiers and the lower bound increases. This effect is especially pronounced for the CIFAR10 dataset, providing a clear indication of the difficulty of robust classification for this dataset. What do these results imply? First, the effectiveness of any defense for a given dataset can be directly analyzed by comparing its robustness to the lower bound. In particular, this allows us to identify regimes of interest where robust classification is possible. Our bound can be used to decide whether a particular adversarial budget is big or small. Second, since our lower bound does not require any distributional assumptions on the data, we are able to directly apply it to empirical distributions, characterizing whether robust classification is possible. Further, in the Gaussian setting, the optimal classifier in the adversarial case depends explicitly on the adversary’s budget. The optimal classifier in the benign case (corresponding to a budget of 0), differs from that for non-zero budgets. This immediately establishes a trade-off between the benign accuracy and adversarial robustness achievable with a given classifier. This raises interesting questions about which classifier should actually be deployed and how large the trade-off is. 
From the explicit solution we derive in the Gaussian setting, we observe that non-robust features occur during classification due to a mismatch between the norms used by the adversary and that governing the data distribution. We expand upon this observation in Section 4.1, which was also made independently by Ilyas et al. [39]. Contributions: We summarize our contributions in this paper as follows: i) we develop a framework for finding general lower bounds for classification error in the presence of an adversary (adversarial robustness) using optimal transport, ii) we show matching upper and lower bounds for adversarial robustness as well as the sample complexity of attaining it for the case of Gaussian data and a convex, origin-symmetric constraint on the adversary and iii) we determine lower bounds on adversarial robustness for empirical datasets of interest and compare them to those of robustly trained classifiers. 2 Preliminaries and Notation In this section, we set up the problem of learning in the presence of an evasion adversary. Such an adversary presents the learner with adversarially modified examples at test time but does not interfere with the training process [17, 34, 77]. We also define notation for the rest of the paper and explain how other work on adversarial examples fits into our setting. We summarize the basic notation in Table 1. We now formally describe the learning problem. There is an unknown P ∈ P(X × {−1, 1}). The learner receives labeled training data (x,y) = ((x0, y0), . . . , (xn−1, yn−1)) ∼ Pn and must select a hypothesis h. The evasion adversary receives a labeled natural example (xTest, yTest) ∼ P and selects x̃ ∈ N(xTest), the set of adversarial examples in the neighborhood of xTest. The adversary gives x̃ to the learner and the learner must estimate yTest. Their performance is measured by the 0-1 loss, `(yTest, h(x̃)). Examples produced by the adversary are elements of a space X̃ . In most applications, X = X̃ , but we find it useful to distinguish them to clarify some definitions. We require N(x) to be nonempty so some choice of x̃ is always available. By taking X = X̃ and N(x) = {x}, we recover the standard problem of learning without an adversary. If N1, N2 are neighborhood functions and N1(x) ⊆ N2(x) for all x ∈ X , N2 represents a stronger adversary. When X = X̃ , a neighborhood function N can be defined using a distance d on X and an adversarial constraint β: N(x) = {x̃ : d(x, x̃) ≤ β}. This provides an ordered family of adversaries of varying strengths used in previous work [17, 34, 68]. The learner’s error rate under the data distribution P with an adversary constrained by the neighborhood function N is L(N,P, h) = E(x,y)∼P [maxx̃∈N(x) `(h(x̃), y)]. 3 Adversarial Robustness from Optimal transport In this section, we explain the connections between adversarially robust classification and optimal transport. At a high level, these arise from the following idea: if a pair of examples, one from each class, are adversarially indistinguishable, then any hypothesis can classify at most one of the examples correctly, By finding families of such pairs, one can obtain lower bounds on classification error rate. When the set of available hypotheses is as large as possible, the best of these lower bounds is tight. Section Roadmap: We will first review some basic concepts from optimal transport theory [80]. 
Then, we will define a cost function for adversarial classification as well as its associated potential functions that are needed to establish Kantorovich duality. We show how a coupling between the conditional distributions of the two classes can be obtained by composing couplings derived from the adversarial strategy and the total variation distance, which links hypothesis testing and transportation costs. Finally, we show that the potential functions have an interpretation in terms of classification, which leads to our theorem connecting adversarial robustness to the optimal transport cost. 3.1 Basic definitions from optimal transport In this section, we use capital letters for random variables and lowercase letters for points in spaces. Couplings A coupling between probability distributions PX on X and PY on Y is a joint distribution on X × Y with marginals PX and PY . Let Π(PX , PY ) be the set of such couplings. Definition 1 (Optimal transport cost). For a cost function c : X × Y → R ∪ {+∞} and marginal distributions PX and PY , the optimal transport cost is C(PX , PY ) = inf PXY ∈Π(PX ,PY ) E(X,Y )∼PXY [c(X,Y )]. (1) Potential functions and Kantorovich duality There is a dual characterization of optimal transport cost in terms of potential functions which we use to make the connection between the transport and classification problems. Definition 2 (Potential functions). Functions f : X → R and g : Y → R are potential functions for the cost c if g(y)− f(x) ≤ c(x, y) for all (x, y) ∈ X × Y . A pair of potential functions provide a one-dimensional representation of the spaces X and Y . This representation must be be faithful to the cost structure on the original spaces: if a pair of points (x, y) are close in transportation cost, then f(x) must be close to g(y). In the dual optimization problem for optimal transport cost, we search for a representation that separates PX from PY as much as possible: C(PX , PY ) = sup f,g EY∼PY [g(Y )]− EX∼PX [f(X)]. (2) For any choices of f , g, and PXY , it is clear that E[g(Y )]− E[f(X)] ≤ E[c(X,Y )]. Kantorovich duality states that there are in fact choices for f and g that attain equality. Define the dual of f relative to c to be f c(y) = infx c(x, y) + f(x). This is the largest function that forms a potential for c when paired with with f . In (2), it is sufficient to optimize over pairs (f, f c). Compositions The composition of cost functions c : X × Y → R and c′ : Y × Z → R is (c ◦ c′) : X × Z → R (c ◦ c′)(x, z) = inf y∈Y c(x, y) + c′(y, z). The composition of optimal transport costs can be defined in two equivalent ways: (C ◦ C ′)(PX , PZ) = inf PY C(PX , PY ) + C ′(PY , PZ) = inf PXZ E[(c ◦ c′)(X,Z)] Total variation distance The total variation distance between distributions P and Q is CTV(P,Q) = sup A P (A)−Q(A). (3) We use this notation because it is the optimal transport cost for the cost function cTV : X × X → R, cTV(x, x ′) = 1[x 6= x′]. Observe that (3) is equivalent to (2) with the additional restrictions that f(x) ∈ {0, 1} for all x, i.e. f is an indicator function for some set A and g = f cTV . For binary classification with a symmetric prior on the classes, a set A that achieves the optimum in Eq. (3) corresponds to an optimal test for distinguishing P from Q. 3.2 Adversarial cost functions and couplings We now construct specialized version of costs and couplings that translate between robust classification and optimal transport. 
Cost functions for adversarial classification The adversarial constraint information N can be encoded into the following cost function cN : X × X̃ → R: cN (x, x̃) = 1[x̃ 6∈ N(x)]. The composition of cN and c>N (i.e. cN with the arguments flipped) has simple combinatorial interpretation: (cN ◦ c>N )(x, x′) = 1[N(x) ∩N(x′) = ∅]. Perhaps the most well-known example of optimal transport is the earth-mover’s or 1-Wasserstein distance, where the cost function is a metric on the underlying space. In general, the transportation cost cN ◦ c>N is not a metric on X because (cN ◦ c>N )(x, x′) = 0 does not necessarily imply x = x′. However, when (cN ◦ c>N )(x, x′) = 0, we say that the points are adversarially indistinguishible. Couplings from adversarial strategies Let a : X → X̃ be a function such that a(x) ∈ N(x) for all x ∈ X . Then a is an admissible adversarial perturbation strategy. The adversarial expected risk can be expressed as a maximization over adversarial strategies: L(N,P, h) = supa1,a−1 E(x,c)∼P [`(h(ac(x)), c)]. Let X̃1 = a1(X1), so a1 gives a coupling PX1X̃1 between PX1 and PX̃1 . By construction, CN (PX1 , PX̃1) = 0. A general coupling between PX1 and PX̃1 with CN (PX1 , PX̃1) = 0 corresponds to a randomized adversarial strategy. We define PX̃−1 and PX−1X̃−1 analogously. By composing the adversarial strategy coupling PX1X̃1 , the total variation coupling of PX̃1 and PX̃−1 , and PX̃−1X−1 , we obtain a coupling PX1X−1 . Potential functions from classifiers Now we can explore the relationship between transport and classification. Consider a given hypothesis h : X̃ → {−1, 1}. A labeled adversarial example (x̃, y) is classified correctly if x̃ ∈ h−1(y). A labeled example (x, y) is classified correctly if N(x) ⊆ h−1(y). Following Cullina et al. [23], we define degraded hypotheses h̃ : X → {−1, 1,⊥}, h̃(x) = { y : N(x) ⊆ h−1(y) ⊥ : otherwise. This allows us to express the adversarial classification accuracy of h, 1− L(N,h, P ), as 1 2 (E[1[h̃(X1) = 1]] + E[1[h̃(X−1) = −1]]). Observe that 1[h̃(x) = 1] + 1[h̃(x′) = −1] ≤ (cN ◦ c>N )(x, x′) + 1. Thus the functions f(x) = 1 − 1[h̃(x) = 1] and g(x) = 1[h̃(x) = −1] are admissible potentials for cN ◦ c>N . This is illustrated in Figure 1. Our first theorem characterizes optimal adversarial robustness when h is allowed to be any classifier. Theorem 1. Let X and X̃ be Polish spaces and let N : X → 2X̃ be an upper-hemicontinuous neighborhood function such that N(x) is nonempty and closed for all x. For any pair of distributions PX1 ,PX−1 on X , (CN ◦ C>N )(PX1 , PX−1) = 1− 2 inf h L(N,h, P ) where h : X̃ → {1,−1} can be any measurable function. Furthermore there is some h that achieves the infimum. In the case of finite spaces, this theorem is essentially equivalent to the König-Egerváry theorem on size of a maximum matching in a bipartite graph. The full proof is in Section A of the Supplementary. If instead of all measurable functions, we consider h ∈ H, a smaller hypothesis class, Theorem 1 provides a lower bound on infh∈H L(N,h, P ). 4 Gaussian data: Optimal loss In this section, we consider the case when the data is generated from a mixture of two Gaussians with identical covariances and means that differ in sign. Directly applying (1) or (2), requires optimizing over either all classifiers or all transportation plans. However, a classifier and a coupling that achieve the same cost must both be optimal. 
We use this to show that optimizing over linear classifiers and ‘translate and pair’ transportation plans characterizes adversarial robustness in this case. Problem setup: Consider a labeled example (X,Y ) ∈ Rd × {−1, 1} such that the example X has a Gaussian conditional distribution, X|(Y = y) ∼ N (yµ,Σ), and Pr(Y = 1) = Pr(Y = −1) = 12 . Let B ⊆ Rd be a closed, convex, absorbing, origin-symmetric set. The adversary is constrained to add perturbations to a data point x contained within βB, where β is an adversarial budget parameter. That is, for all x, N(x) = x + βB. This includes `p-constrained adversaries as the special case B = {z : ‖z‖p ≤ 1}. For N and P of this form, we will determine infh L(N,P, h) where h can be any measurable function. We first define the following convex optimization problem in order to state Theorem 2. In the proof of Theorem 2, it will become clear how it arises. Definition 3. Let α∗(β, µ) be the solution to the following convex optimization problem: (z, y, α) ∈ Rd+d+1 minα s.t. ‖y‖Σ ≤ α ‖z‖B ≤ β z + y = µ (4) where we use the seminorms ‖y‖Σ = √ y>Σ−1y and ‖z‖B = inf{β : z ∈ βB}. Theorem 2. Let N(x) = x + βB. Then (CN ◦ C>N )(N (µ,Σ),N (−µ,Σ)) = 1 − 2Q(α∗(β, µ)), where Q is the complementary cumulative distribution function for N (0, 1). The crucial properties of the solution to (4) are characterized in the following lemma. Lemma 1. Let µ ∈ Rd, β ≥ 0, and α = α∗(β, x). There are y, z, w ∈ Rd such that y + z = µ and ‖y‖Σ = α ‖z‖B = β ‖w‖Σ∗ = 1 ‖w‖B∗ = γ w>y = α w>z = βγ. The proof of Lemma 1 is in Section B.1 of the Supplementary. Proof of Theorem 2. We start from the definition of optimal transport cost and consider the restricted class of “translate and pair in place” couplings to get an upper bound. In these couplings, the adversarial attacks are translations by a constant: X̃1 = X1 + z and X̃−1 = X−1 − z. The total variation coupling between X̃1 and X̃−1 does “pairing in place”. (CN ◦ C>N )(PX1 , PX−1) ≤ inf z∈βB CTV (PX̃1 , PX̃−1) = infz∈βB sup w 2Q ( wᵀz − wᵀµ√ wᵀΣw ) − 1. The full computation of the total variation between Gaussians is in Section B.2 of the Supplementary.. The infimum is attained at w∗ = 2Σ−1(z − µ) and its value is √ (z − µ)ᵀΣ−1(z − µ). The choice of z from Lemma 1 makes the upper bound 2Q(−α∗(β, µ))− 1 = 1− 2Q(α∗(β, µ)). Now we consider the lower bounds on optimal transport cost from linear classification functions of the form fw(x) = sgn (wᵀx). In the presence of an adversary, the classification problem becomes maxw P(x,y)∼P [fw(x+ aw,y(x)) = y] . When y = 1, the correct classification event is fw(x+ aw,1(x)) = 1, or equivalently wᵀx− β‖w‖B∗ > 0. This ultimately gives the lower bound (CN ◦ C>N )(PX1 , PX−1) ≥ sup w 1− 2Q ( β‖w‖B∗ − wᵀµ ‖w‖Σ∗ ) . (5) The full calculation appears in the supplementary material (Section B.3). From Lemma 1, there is a choice of w that makes the bound in (5) equal to 1− 2Q(α∗(β, µ)). The proof of Theorem 2 shows that linear classifiers are optimal for this problem. The choice of w provided by Lemma 1 specifies the orientation of the optimal classifier. 4.1 Special cases Matching norms for data and adversary: When B is the unit ball derived from Σ, the optimization problem (4) has a very simple solution: α∗(β, µ) = ‖µ‖Σ−β, y = αµ, z = βµ, andw = 1‖µ‖Σ Σ −1µ. Thus, the same classifier is optimal for all adversarial budgets. In general, α∗(0, µ) = ‖µ‖Σ and α∗(‖µ‖B, µ) = 0, but α∗(β, µ) can be nontrivially convex for 0 ≤ β ≤ ‖µ‖B. 
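To make problem (4) concrete, here is a small CVXPY sketch of ours (not the authors' code) that computes α∗(β, µ) numerically when B is an ℓp unit ball, and converts it to the minimum adversarial 0-1 loss Q(α∗) that follows from combining Theorems 1 and 2.

```python
import cvxpy as cp
import numpy as np
from scipy.stats import norm

def alpha_star(mu, Sigma, beta, p="inf"):
    """Solve problem (4) with B the unit l_p ball:
    minimize alpha s.t. ||y||_Sigma <= alpha, ||z||_p <= beta, y + z = mu."""
    d = mu.shape[0]
    L = np.linalg.cholesky(np.linalg.inv(Sigma))    # Sigma^{-1} = L L^T
    y, z, alpha = cp.Variable(d), cp.Variable(d), cp.Variable()
    constraints = [cp.norm(L.T @ y, 2) <= alpha,    # ||y||_Sigma = ||L^T y||_2
                   cp.norm(z, p) <= beta,
                   y + z == mu]
    cp.Problem(cp.Minimize(alpha), constraints).solve()
    return float(alpha.value)

def min_adversarial_loss(mu, Sigma, beta, p="inf"):
    """Theorem 2 gives transport cost 1 - 2 Q(alpha*); by Theorem 1 the
    minimum possible 0-1 loss is therefore Q(alpha*)."""
    return float(norm.sf(alpha_star(mu, Sigma, beta, p)))

if __name__ == "__main__":
    d = 4
    mu, Sigma = np.ones(d), np.eye(d)
    # Matching norms (B the Sigma-ball; with Sigma = I this is the l2 ball):
    # alpha*(beta, mu) should equal ||mu||_2 - beta, up to solver tolerance.
    print(alpha_star(mu, Sigma, beta=0.5, p=2), np.linalg.norm(mu) - 0.5)
    print(min_adversarial_loss(mu, Sigma, beta=0.5, p="inf"))
```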
When there is a difference between the two seminorms, the optimal modification is not proportional to µ, which can be exploited by the adversary. The optimal classifier varies with the adversarial budget, so there is a trade-off between accuracy and robust accuracy. ℓ∞ adversaries: In Figure 2, we illustrate this phenomenon for an ℓ∞ adversary. We plot α∗(β, µ) for Σ = I (so ‖ · ‖Σ = ‖ · ‖2) and taking B to be the ℓ∞ unit ball (so ‖ · ‖B = ‖ · ‖∞). In this case (4) has an explicit solution. For each coordinate zi, set zi = min(µi, β), which gives yi = µi − min(µi, β) and makes the constraints tight. Thus, as β increases, more components of z equal those of µ, reducing the marginal effect of an additional increase in β. Due to the mismatch between the seminorms governing the data and the adversary, the value of β determines which features are useful for classification, since features smaller than β can be completely erased. Without an adversary, all of these features would be potentially useful for classification, implying that human-imposed adversarial constraints, with their mismatch from the underlying geometry of the data distribution, lead to the presence of non-robust features that are nevertheless useful for classification. A similar observation was made in concurrent work by Ilyas et al. [39]. 5 Gaussian data: Sample complexity lower bound In this section, we use the characterization of the optimal loss in the Gaussian robust classification problem to establish the optimality of a rule for learning from a finite number of samples. This allows for a precise characterization of sample complexity in the learning problem. Consider the following Bayesian learning problem, which generalizes a problem considered by Schmidt et al. [68]. We start from the classification problem defined in Section 4. There, the choice of the classifier h could directly depend on µ and Σ. Now we give µ the distribution N(0, (1/m)I). A learner who knows this prior but not the value of µ is provided with n i.i.d. labeled training examples. The learner selects any measurable classification function ĥn : Rd → {−1, 1} by applying some learning algorithm to the training data with the goal of minimizing E[L(N, P, ĥn)]. The optimal transport approach allows us to determine the exact optimal loss for this problem for each n as well as the optimal learning algorithm. To characterize this loss, we need the following definitions. Let A be the ℓ2 unit ball: {y ∈ Rd : ‖y‖2 ≤ 1}. Let S(α, β) = {(x, t) ∈ Rd × R : x ∈ tαA + βB}. Theorem 3. In the learning problem described above, the minimum loss of any learning rule is Pr_{V∼N(0,I)}[V ∈ S(ρ, βρ)], where ρ² = m(m + n)/n. The proof is in Section C of the Supplementary. The special case where B is an ℓ∞ ball was considered by Schmidt et al. [68]. They obtained a lower bound on loss that can be expressed in our notation as Pr[V ∈ S(0, ρβ)]. This bound essentially ignores the random noise in the problem and computes the probability that after seeing n training examples, the posterior distributions for Xn+1|(Yn+1 = 1) and Xn+1|(Yn+1 = −1) are adversarially indistinguishable. The true optimal loss takes into account the intermediate case in which these posterior distributions are difficult but not impossible to distinguish in the presence of an adversary. Schmidt et al. investigate sample complexity in the following parameter regime: m = c₁√d, which by design is a low-noise regime.
In this regime, they establish upper and lower bounds on the sample complexity of learning an adversarially robust classifier: Cβ²d / log d ≤ n ≤ C′β²d. By taking into account the effect of the random noise, our characterization of the loss closes this gap. For larger values of m, the difference between Pr[V ∈ S(0, ρβ)] and Pr[V ∈ S(ρ, ρβ)] becomes more significant, so our analysis is useful over a much broader range of parameters. 6 Experimental Results In this section, we use Theorem 1 to find lower bounds on adversarial robustness for empirical datasets of interest. We also compare these bounds to the performance of robustly trained classifiers on adversarial examples and find a gap for larger perturbation values. For reproducibility purposes, our code is available at https://github.com/inspire-group/robustness-via-transport. 6.1 Experimental Setup We consider the adversarial classification problem on three widely used image datasets, namely MNIST [50], Fashion-MNIST [82] and CIFAR-10 [47], and obtain lower bounds on the adversarial robustness of any classifier for these datasets. For each dataset, we use data from classes 3 (PX1) and 7 (PX−1) to obtain a binary classification problem. This choice is arbitrary and similar results are obtained with other choices, which we omit for brevity. We use 2000 images from the training set of each class to compute the lower bound on adversarial robustness when the adversary is constrained using the ℓ2 norm. For the ℓ∞ norm, these pairs of classes are very well separated, making the lower bounds less interesting (results in Section D of the Supplementary). For the MNIST and Fashion-MNIST datasets, we compare the lower bound with the performance of a 3-layer Convolutional Neural Network (CNN) that is robustly trained using iterative adversarial training [54] with the Adam optimizer [43] for 12 epochs. This network achieves 99.9% accuracy on the ‘3 vs. 7’ binary classification task on both MNIST and Fashion-MNIST. For the CIFAR-10 dataset, we use a ResNet-18 [36] trained for 200 epochs, which achieves 97% accuracy on the binary classification task. To generate adversarial examples, both during the training process and to test robustness, we use Projected Gradient Descent (PGD) with an ℓ2 constraint, random initialization and a minimum of 10 iterations. Since more powerful heuristic attacks may be possible against these robustly trained classifiers, the ‘robust classifier loss’ reported here is a lower bound. 6.2 Lower bounds on adversarial robustness for empirical distributions Now, we describe the steps we follow to obtain a lower bound on adversarial robustness for empirical distributions through a direct application of Theorem 1. We first create a k × k matrix D whose entries are ‖xi − xj‖p, where k is the number of samples from each class and p defines the norm. Next, we threshold these entries to obtain Dthresh, the matrix of adversarial costs (cN ◦ c>N)(xi, xj) (recall Section 3.2), whose (i, j)th entry is 1 if Dij > 2β and 0 otherwise, where β is the constraint on the adversary. Finally, the optimal coupling cost (CN ◦ C>N)(PX1, PX−1) is computed by performing minimum-weight matching over the bipartite graph defined by the cost matrix Dthresh using the Linear Sum Assignment module from SciPy [41]. In Figure 4, we show the variation in the minimum possible 0-1 loss (adversarial robustness) in the presence of an ℓ2-constrained adversary as the attack budget β is increased.
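A sketch of this lower-bound computation, under our reading of the procedure above; SciPy's linear_sum_assignment plays the role of the minimum-weight matching, and the data below are synthetic stand-ins rather than MNIST digits.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def robustness_lower_bound(x_pos, x_neg, beta, p=2):
    """Lower bound on the adversarial 0-1 loss of any classifier.

    Builds the k x k matrix of pairwise l_p distances, marks a pair as
    adversarially indistinguishable when the distance is at most 2*beta
    (cost 0, otherwise cost 1), solves the minimum-weight matching, and
    converts the transport cost C into the bound (1 - C) / 2 via Theorem 1.
    """
    k = x_pos.shape[0]
    D = cdist(x_pos.reshape(k, -1), x_neg.reshape(k, -1), metric="minkowski", p=p)
    cost = (D > 2.0 * beta).astype(float)           # (c_N o c_N^T)(x_i, x_j)
    rows, cols = linear_sum_assignment(cost)
    transport_cost = cost[rows, cols].mean()
    return (1.0 - transport_cost) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_pos = rng.normal(+1.0, 1.0, size=(200, 32))   # stand-ins for the '3' class
    x_neg = rng.normal(-1.0, 1.0, size=(200, 32))   # stand-ins for the '7' class
    for beta in [0.0, 2.0, 4.0, 6.0]:
        print(beta, robustness_lower_bound(x_pos, x_neg, beta))
```

For each β, the value returned corresponds to the minimum possible 0-1 loss plotted in Figure 4.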
We compare this loss value to that of a robustly trained classifier [54] when the PGD attack is used (on the same data). Up to a certain β value, robust training converges and the model attains a non-trivial adversarial robustness value. Nevertheless, there is a gap between the empirically obtained and theoretically predicted minimum loss values. Further, after β = 3.8 (MNIST), β = 4.8 (Fashion MNIST) and β = 1.5 (CIFAR-10), we observe that robust training is unable to converge. We believe this occurs because a large fraction of the data at that value of β is close to the boundary when adversarially perturbed, making the classification problem very challenging. We note that in order to reduce the classification accuracy to random for CIFAR-10, a much larger ℓ2 budget is needed compared to either MNIST or Fashion-MNIST, implying that the classes are better separated.

7 Related work and Concluding Remarks

We only discuss the closest related work that analyzes evasion attacks theoretically. Extensive recent surveys [10, 51, 64] provide a broader overview.

Distribution-specific generalization analysis: Schmidt et al. [68] studied the sample complexity of learning a mixture of Gaussians as well as Bernoulli distributed data in the presence of ℓ∞-bounded adversaries, which we recover as a special case of our framework in Section 5. Gilmer et al. [33] and Diochnos et al. [26] analyzed the robustness of classifiers for specific distributions, i.e., points distributed on two concentric spheres and points on the Boolean hypercube respectively. In contrast to these papers, our framework applies to any binary classification problem, as our lower bound holds for arbitrary distributions.

Sample complexity in the PAC setting: Cullina et al. [23], Yin et al. [85] and Montasser et al. [56] derive the sample complexity needed to PAC-learn a hypothesis class in the presence of an evasion adversary. These approaches do not provide an analysis of the optimal loss under a given distribution, but only of the number of samples needed to get ε-close to it, i.e., to learn the best empirical hypothesis.

Optimal transport for bounds on adversarial robustness: Sinha et al. [73] constrain the adversary using a Wasserstein distance bound on the distribution that results from perturbing the benign distribution and study the sample complexity of SGD for minimizing the relaxed Lagrangian formulation of the learning problem with this constraint. In contrast, we use a cost function that characterizes sample-wise adversarial perturbation exactly, which aligns with current practice, and provide a lower bound on the 0−1 loss with an adversary, while Sinha et al. minimize an upper bound to perform robust training. Mahloujifar et al. [55] and Dohmatob [27] use the ‘blowup’ property exhibited by certain data distributions to provide bounds on adversarial risk, given some level of ordinary risk. In comparison, our assumptions on the example space, distribution, and adversarial constraints are much milder. Even in regimes where these frameworks are applicable, our approach provides two key advantages. First, our bounds explicitly concern the adversarial robustness of the optimal classifier, while theirs relate the adversarial robustness to the benign classification error of a classifier. Thus, our bounds can still be nontrivial even when there is a classifier with a benign classification error of zero, which is exactly the case in our MNIST experiments.
Second, our bounds apply for any adversarial budget, while theirs become non-trivial only when the adversarial budget exceeds a critical threshold depending on the properties of the space.

Possibility of robust classification: Bubeck et al. [13] show that there exist classification tasks in the statistical query model for which there is no efficient algorithm to learn robust classifiers. Tsipras et al. [79], Zhang et al. [87] and Suggala et al. [76] study the trade-offs between robustness and accuracy. We discuss this trade-off for Gaussian data in Section 4.

7.1 Concluding remarks

Our framework provides lower bounds on adversarial robustness through the use of optimal transport for binary classification problems, which we apply to empirical datasets of interest to analyze the performance of current defenses. As a special case, we also characterize the learning problem exactly in the case of Gaussian data and study the relationship between noise in the learning problem and adversarial perturbations. Recent work [30, 32] has established an empirical connection between these two noise regimes, and an interesting direction would be to precisely characterize which type of noise dominates the learning process for a given adversarial budget. In future work, we will extend our framework to the multi-class classification setting. Another natural next step would be to consider distributions beyond the Gaussian to derive expressions for optimal adversarial robustness as well as the sample complexity of attaining it.

Acknowledgements

We would like to thank Chawin Sitawarin for providing part of the code used in our experiments. This research was sponsored by the National Science Foundation under grants CNS-1553437, CNS-1704105, CIF-1617286 and EARS-1642962, by Intel through the Intel Faculty Research Award, by the Office of Naval Research through the Young Investigator Program (YIP) Award, by the Army Research Office through the Young Investigator Program (YIP) Award and a Schmidt DataX Award. ANB would like to thank Siemens for supporting him through the FutureMakers Fellowship.
1. What is the main contribution of the paper in terms of its setting and results? 2. How does the reviewer assess the simplicity or lack thereof of the paper's technical approach? 3. What impact does the choice of setting have on the paper's overall novelty and significance? 4. Are there any concerns regarding the sufficiency of the proposed method for optimizing classifiers and transportation plans? 5. How clear and well-communicated are the main findings and messages of the paper?
Review
Review The setting of binary classification with class-conditional data being Gaussians with identical covariance matrices gives interesting results, but it is too simple to pose any novel technical challenges. Their setting ensures (needs a short proof) that optimizing over only linear classifiers and translate-and-pair transportation plans or couplings is sufficient to lower bound the adversarial robustness. It helps improve the clarity of their main message but brings the submission down a little bit on the originality and significance metrics, in my opinion.
NIPS
Title Lower Bounds on Adversarial Robustness from Optimal Transport

Abstract While progress has been made in understanding the robustness of machine learning classifiers to test-time adversaries (evasion attacks), fundamental questions remain unresolved. In this paper, we use optimal transport to characterize the minimum possible loss in an adversarial classification scenario. In this setting, an adversary receives a random labeled example from one of two classes, perturbs the example subject to a neighborhood constraint, and presents the modified example to the classifier. We define an appropriate cost function such that the minimum transportation cost between the distributions of the two classes determines the minimum 0−1 loss for any classifier. When the classifier comes from a restricted hypothesis class, the optimal transportation cost provides a lower bound. We apply our framework to the case of Gaussian data with norm-bounded adversaries and explicitly show matching bounds for the classification and transport problems as well as the optimality of linear classifiers. We also characterize the sample complexity of learning in this setting, deriving and extending previously known results as a special case. Finally, we use our framework to study the gap between the optimal classification performance possible and that currently achieved by state-of-the-art robustly trained neural networks for datasets of interest, namely, MNIST, Fashion MNIST and CIFAR-10.

∗Equal contribution. †Work done while at Princeton University. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

1 Introduction

Machine learning (ML) has become ubiquitous due to its impressive performance in a wide variety of domains such as image recognition [48, 72], natural language and speech processing [22, 25, 37], game-playing [12, 59, 71] and aircraft collision avoidance [42]. This ubiquity, however, provides adversaries with both the opportunity and incentive to strategically fool machine learning systems during both the training (poisoning attacks) [5, 9, 40, 60, 67] and test (evasion attacks) [8, 17, 34, 57, 58, 63, 77] phases. In an evasion attack, an adversary adds imperceptible perturbations to inputs in the test phase to cause misclassification. A large number of adversarial example-based evasion attacks have been proposed against ML algorithms used for tasks such as image classification [8, 17, 19, 34, 63, 77], object detection [21, 53, 83], image segmentation [2, 31] and speech recognition [18, 86]; generative models for image data [45] and even reinforcement learning algorithms [38, 46]. These attacks have been carried out in black-box [7, 11, 20, 52, 61, 62, 77] as well as in physical settings [29, 49, 70, 74]. A wide variety of defenses based on adversarial training [34, 54, 78], input de-noising through transformations [6, 24, 28, 69, 84], distillation [65], ensembling [1, 4, 75] and feature nullification [81] were proposed to defend ML algorithms against evasion attacks, only for most to be rendered ineffective by stronger attacks [3, 14–16]. Iterative adversarial training [54] is a current state-of-the-art empirical defense. Recently, defenses that rely on adversarial training and are provably robust to small perturbations have been proposed [35, 44, 66, 73] but are unable to achieve good generalization behavior on standard datasets such as CIFAR-10 [47].
In spite of an active line of research that has worked to characterize the difficulty of learning in the presence of evasion adversaries by analyzing the sample complexity of learning classifiers for known distributions [68] as well as in the distribution-free setting [23, 56, 85], fundamental questions remain unresolved. One such question is: what is the behavior of the optimal achievable loss in the presence of an adversary? In this paper, we derive bounds on the 0−1 loss of classifiers while classifying adversarially modified data at test time, which is often referred to as adversarial robustness. We first develop a framework that relates classification in the presence of an adversary and optimal transport with an appropriately defined adversarial cost function. For an arbitrary data distribution with two classes, we characterize optimal adversarial robustness in terms of the transportation distance between the classes. When the classifier comes from a restricted hypothesis class, we obtain a lower bound on the minimum possible 0−1 loss (or equivalently, an upper bound on the maximum possible classification accuracy). We then consider the case of a mixture of two Gaussians and derive matching upper and lower bounds for adversarial robustness by framing it as a convex optimization problem and proving the optimality of linear classifiers. For an ℓ∞ adversary, we also present the explicit solution for this optimization problem and analyze its properties. Further, we derive an expression for sample complexity with the assumption of a Gaussian prior on the mean of the Gaussians, which allows us to independently match and extend the results from Schmidt et al. [68] as a special case. Finally, in our experiments, we find transportation costs between the classes of empirical distributions of interest such as MNIST [50], Fashion-MNIST [82] and CIFAR-10 [47] for adversaries bounded by ℓ2 and ℓ∞ distance constraints, and relate them to the classification loss of state-of-the-art robust classifiers. Our results demonstrate that as the adversarial budget increases, the gap between current robust classifiers and the lower bound increases. This effect is especially pronounced for the CIFAR-10 dataset, providing a clear indication of the difficulty of robust classification for this dataset.

What do these results imply? First, the effectiveness of any defense for a given dataset can be directly analyzed by comparing its robustness to the lower bound. In particular, this allows us to identify regimes of interest where robust classification is possible. Our bound can be used to decide whether a particular adversarial budget is large or small. Second, since our lower bound does not require any distributional assumptions on the data, we are able to directly apply it to empirical distributions, characterizing whether robust classification is possible. Further, in the Gaussian setting, the optimal classifier in the adversarial case depends explicitly on the adversary's budget. The optimal classifier in the benign case (corresponding to a budget of 0) differs from that for non-zero budgets. This immediately establishes a trade-off between the benign accuracy and adversarial robustness achievable with a given classifier. This raises interesting questions about which classifier should actually be deployed and how large the trade-off is.
From the explicit solution we derive in the Gaussian setting, we observe that non-robust features occur during classification due to a mismatch between the norm used by the adversary and the one governing the data distribution. We expand upon this observation, which was also made independently by Ilyas et al. [39], in Section 4.1.

Contributions: We summarize our contributions in this paper as follows: i) we develop a framework for finding general lower bounds for classification error in the presence of an adversary (adversarial robustness) using optimal transport, ii) we show matching upper and lower bounds for adversarial robustness as well as the sample complexity of attaining it for the case of Gaussian data and a convex, origin-symmetric constraint on the adversary, and iii) we determine lower bounds on adversarial robustness for empirical datasets of interest and compare them to those of robustly trained classifiers.

2 Preliminaries and Notation

In this section, we set up the problem of learning in the presence of an evasion adversary. Such an adversary presents the learner with adversarially modified examples at test time but does not interfere with the training process [17, 34, 77]. We also define notation for the rest of the paper and explain how other work on adversarial examples fits into our setting. We summarize the basic notation in Table 1. We now formally describe the learning problem. There is an unknown P ∈ P(X × {−1, 1}). The learner receives labeled training data (x, y) = ((x0, y0), . . . , (xn−1, yn−1)) ∼ Pn and must select a hypothesis h. The evasion adversary receives a labeled natural example (xTest, yTest) ∼ P and selects x̃ ∈ N(xTest), the set of adversarial examples in the neighborhood of xTest. The adversary gives x̃ to the learner and the learner must estimate yTest. Their performance is measured by the 0−1 loss, ℓ(yTest, h(x̃)). Examples produced by the adversary are elements of a space X̃. In most applications, X = X̃, but we find it useful to distinguish them to clarify some definitions. We require N(x) to be nonempty so some choice of x̃ is always available. By taking X = X̃ and N(x) = {x}, we recover the standard problem of learning without an adversary. If N1, N2 are neighborhood functions and N1(x) ⊆ N2(x) for all x ∈ X, N2 represents a stronger adversary. When X = X̃, a neighborhood function N can be defined using a distance d on X and an adversarial constraint β: N(x) = {x̃ : d(x, x̃) ≤ β}. This provides an ordered family of adversaries of varying strengths used in previous work [17, 34, 68]. The learner's error rate under the data distribution P with an adversary constrained by the neighborhood function N is L(N,P, h) = E(x,y)∼P [maxx̃∈N(x) ℓ(h(x̃), y)].

3 Adversarial Robustness from Optimal Transport

In this section, we explain the connections between adversarially robust classification and optimal transport. At a high level, these arise from the following idea: if a pair of examples, one from each class, are adversarially indistinguishable, then any hypothesis can classify at most one of the examples correctly. By finding families of such pairs, one can obtain lower bounds on the classification error rate. When the set of available hypotheses is as large as possible, the best of these lower bounds is tight. Section Roadmap: We will first review some basic concepts from optimal transport theory [80].
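For a restricted but common special case of the definition of L(N,P, h) just given, the inner maximization can be evaluated in closed form: when h is a linear classifier and N(x) is an ℓp ball of radius β, the worst-case perturbation simply subtracts β times the dual norm of the weight vector from the margin. The sketch below uses this fact to compute the empirical adversarial 0−1 loss; the classifier, the toy data, and the function name are our own illustrative choices, not part of the paper.

import numpy as np

def adversarial_01_loss_linear(w, b, X, y, beta, p=np.inf):
    # Empirical L(N, P, h) for h(x) = sign(w.x + b) and
    # N(x) = {x~ : ||x~ - x||_p <= beta}.  The inner max has a closed form:
    # worst-case margin = y * (w.x + b) - beta * ||w||_q, with 1/p + 1/q = 1.
    q = 1.0 if np.isinf(p) else (np.inf if p == 1 else p / (p - 1.0))
    margins = y * (X @ w + b) - beta * np.linalg.norm(w, ord=q)
    return np.mean(margins <= 0)   # error under the worst-case perturbation

rng = np.random.default_rng(1)
w, b = np.array([1.0, -2.0]), 0.0
X = rng.normal(size=(1000, 2))
y = np.where(X @ w + rng.normal(scale=0.5, size=1000) > 0, 1.0, -1.0)
print(adversarial_01_loss_linear(w, b, X, y, beta=0.0))   # benign error
print(adversarial_01_loss_linear(w, b, X, y, beta=0.2))   # adversarial error

For general hypotheses the inner maximum has no such closed form, which is precisely why the optimal transport characterization developed next is useful.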
Then, we will define a cost function for adversarial classification as well as its associated potential functions that are needed to establish Kantorovich duality. We show how a coupling between the conditional distributions of the two classes can be obtained by composing couplings derived from the adversarial strategy and the total variation distance, which links hypothesis testing and transportation costs. Finally, we show that the potential functions have an interpretation in terms of classification, which leads to our theorem connecting adversarial robustness to the optimal transport cost.

3.1 Basic definitions from optimal transport

In this section, we use capital letters for random variables and lowercase letters for points in spaces.

Couplings A coupling between probability distributions PX on X and PY on Y is a joint distribution on X × Y with marginals PX and PY. Let Π(PX, PY) be the set of such couplings.

Definition 1 (Optimal transport cost). For a cost function c : X × Y → R ∪ {+∞} and marginal distributions PX and PY, the optimal transport cost is C(PX, PY) = inf_{PXY ∈ Π(PX,PY)} E(X,Y)∼PXY [c(X,Y)]. (1)

Potential functions and Kantorovich duality There is a dual characterization of optimal transport cost in terms of potential functions which we use to make the connection between the transport and classification problems.

Definition 2 (Potential functions). Functions f : X → R and g : Y → R are potential functions for the cost c if g(y) − f(x) ≤ c(x, y) for all (x, y) ∈ X × Y.

A pair of potential functions provide a one-dimensional representation of the spaces X and Y. This representation must be faithful to the cost structure on the original spaces: if a pair of points (x, y) are close in transportation cost, then f(x) must be close to g(y). In the dual optimization problem for optimal transport cost, we search for a representation that separates PX from PY as much as possible: C(PX, PY) = sup_{f,g} EY∼PY [g(Y)] − EX∼PX [f(X)]. (2) For any choices of f, g, and PXY, it is clear that E[g(Y)] − E[f(X)] ≤ E[c(X,Y)]. Kantorovich duality states that there are in fact choices for f and g that attain equality. Define the dual of f relative to c to be f^c(y) = inf_x c(x, y) + f(x). This is the largest function that forms a potential for c when paired with f. In (2), it is sufficient to optimize over pairs (f, f^c).

Compositions The composition of cost functions c : X × Y → R and c′ : Y × Z → R is (c ◦ c′) : X × Z → R, given by (c ◦ c′)(x, z) = inf_{y∈Y} c(x, y) + c′(y, z). The composition of optimal transport costs can be defined in two equivalent ways: (C ◦ C′)(PX, PZ) = inf_{PY} C(PX, PY) + C′(PY, PZ) = inf_{PXZ} E[(c ◦ c′)(X,Z)].

Total variation distance The total variation distance between distributions P and Q is CTV(P,Q) = sup_A P(A) − Q(A). (3) We use this notation because it is the optimal transport cost for the cost function cTV : X × X → R, cTV(x, x′) = 1[x ≠ x′]. Observe that (3) is equivalent to (2) with the additional restrictions that f(x) ∈ {0, 1} for all x, i.e., f is an indicator function for some set A and g = f^cTV. For binary classification with a symmetric prior on the classes, a set A that achieves the optimum in Eq. (3) corresponds to an optimal test for distinguishing P from Q.

3.2 Adversarial cost functions and couplings

We now construct specialized versions of costs and couplings that translate between robust classification and optimal transport.
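A small numerical check of the total variation characterization in (3) may help: for discrete distributions, the optimal transport cost of Definition 1 with the 0/1 cost cTV can be computed as a tiny linear program over couplings, and it coincides with sup_A P(A) − Q(A) = (1/2)‖P − Q‖1. The LP encoding below (row-major flattening of the coupling, one equality constraint per marginal) is our own; it is a sketch for intuition, not part of the paper.

import numpy as np
from scipy.optimize import linprog

def ot_cost(p, q, C):
    # Optimal transport cost (Definition 1) between discrete distributions
    # p and q with cost matrix C, as a linear program over couplings.
    n, m = C.shape
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # row i of the coupling sums to p_i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # column j of the coupling sums to q_j
    b_eq = np.concatenate([p, q])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
C_tv = 1.0 - np.eye(3)             # c_TV(x, x') = 1[x != x']
print(ot_cost(p, q, C_tv))         # OT cost with the 0/1 cost ...
print(0.5 * np.abs(p - q).sum())   # ... equals sup_A P(A) - Q(A) = 0.2

The same LP with a different cost matrix computes any optimal transport cost of Definition 1 on finite spaces, which is how the adversarial costs of Section 3.2 are handled empirically later in the paper.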
Cost functions for adversarial classification The adversarial constraint information N can be encoded into the following cost function cN : X × X̃ → R: cN(x, x̃) = 1[x̃ ∉ N(x)]. The composition of cN and c>N (i.e. cN with the arguments flipped) has a simple combinatorial interpretation: (cN ◦ c>N)(x, x′) = 1[N(x) ∩ N(x′) = ∅]. Perhaps the most well-known example of optimal transport is the earth-mover's or 1-Wasserstein distance, where the cost function is a metric on the underlying space. In general, the transportation cost cN ◦ c>N is not a metric on X because (cN ◦ c>N)(x, x′) = 0 does not necessarily imply x = x′. However, when (cN ◦ c>N)(x, x′) = 0, we say that the points are adversarially indistinguishable.

Couplings from adversarial strategies Let a : X → X̃ be a function such that a(x) ∈ N(x) for all x ∈ X. Then a is an admissible adversarial perturbation strategy. The adversarial expected risk can be expressed as a maximization over adversarial strategies: L(N,P, h) = sup_{a1,a−1} E(x,c)∼P [ℓ(h(ac(x)), c)]. Let X̃1 = a1(X1), so a1 gives a coupling PX1X̃1 between PX1 and PX̃1. By construction, CN(PX1, PX̃1) = 0. A general coupling between PX1 and PX̃1 with CN(PX1, PX̃1) = 0 corresponds to a randomized adversarial strategy. We define PX̃−1 and PX−1X̃−1 analogously. By composing the adversarial strategy coupling PX1X̃1, the total variation coupling of PX̃1 and PX̃−1, and PX̃−1X−1, we obtain a coupling PX1X−1.

Potential functions from classifiers Now we can explore the relationship between transport and classification. Consider a given hypothesis h : X̃ → {−1, 1}. A labeled adversarial example (x̃, y) is classified correctly if x̃ ∈ h−1(y). A labeled example (x, y) is classified correctly if N(x) ⊆ h−1(y). Following Cullina et al. [23], we define degraded hypotheses h̃ : X → {−1, 1,⊥}, where h̃(x) = y if N(x) ⊆ h−1(y) and h̃(x) = ⊥ otherwise. This allows us to express the adversarial classification accuracy of h, 1 − L(N, h, P), as (1/2)(E[1[h̃(X1) = 1]] + E[1[h̃(X−1) = −1]]). Observe that 1[h̃(x) = 1] + 1[h̃(x′) = −1] ≤ (cN ◦ c>N)(x, x′) + 1. Thus the functions f(x) = 1 − 1[h̃(x) = 1] and g(x) = 1[h̃(x) = −1] are admissible potentials for cN ◦ c>N. This is illustrated in Figure 1. Our first theorem characterizes optimal adversarial robustness when h is allowed to be any classifier.

Theorem 1. Let X and X̃ be Polish spaces and let N : X → 2^X̃ be an upper-hemicontinuous neighborhood function such that N(x) is nonempty and closed for all x. For any pair of distributions PX1, PX−1 on X, (CN ◦ C>N)(PX1, PX−1) = 1 − 2 inf_h L(N, h, P), where h : X̃ → {1,−1} can be any measurable function. Furthermore, there is some h that achieves the infimum.

In the case of finite spaces, this theorem is essentially equivalent to the König-Egerváry theorem on the size of a maximum matching in a bipartite graph. The full proof is in Section A of the Supplementary. If instead of all measurable functions, we consider h ∈ H, a smaller hypothesis class, Theorem 1 provides a lower bound on infh∈H L(N, h, P).

4 Gaussian data: Optimal loss

In this section, we consider the case when the data is generated from a mixture of two Gaussians with identical covariances and means that differ in sign. Directly applying (1) or (2) requires optimizing over either all classifiers or all transportation plans. However, a classifier and a coupling that achieve the same cost must both be optimal.
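Before moving to the Gaussian case, a one-dimensional sketch of the degraded hypotheses h̃ from Section 3.2 may be useful. For a threshold classifier h(x) = sign(x − t) and interval neighborhoods N(x) = [x − β, x + β], h̃ returns a label only when the entire neighborhood lies on one side of the threshold, and the adversarial accuracy is the average of the two per-class probabilities as in the expression above. The threshold classifier, the Gaussian toy data, and the use of 0 to stand in for ⊥ are our own illustrative choices.

import numpy as np

def h_tilde(x, t, beta):
    # Degraded hypothesis for h(x) = sign(x - t) with N(x) = [x - beta, x + beta]:
    # +1 / -1 only if the whole neighborhood gets that label, else 0 (for "bottom").
    out = np.zeros_like(x)
    out[x - t > beta] = 1.0
    out[x - t < -beta] = -1.0
    return out

rng = np.random.default_rng(2)
mu, beta, t = 1.5, 0.5, 0.0
X_pos = rng.normal(loc=+mu, size=100_000)   # samples of X_1
X_neg = rng.normal(loc=-mu, size=100_000)   # samples of X_{-1}
adv_acc = 0.5 * (np.mean(h_tilde(X_pos, t, beta) == 1.0)
                 + np.mean(h_tilde(X_neg, t, beta) == -1.0))
print(adv_acc)   # adversarial accuracy 1 - L(N, h, P); here it is about Phi(mu - beta)

Points within β of the threshold are exactly the ones the adversary can flip, which is what the ⊥ output of h̃ records.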
We use this to show that optimizing over linear classifiers and ‘translate and pair' transportation plans characterizes adversarial robustness in this case.

Problem setup: Consider a labeled example (X,Y) ∈ Rd × {−1, 1} such that the example X has a Gaussian conditional distribution, X|(Y = y) ∼ N(yµ,Σ), and Pr(Y = 1) = Pr(Y = −1) = 1/2. Let B ⊆ Rd be a closed, convex, absorbing, origin-symmetric set. The adversary is constrained to add perturbations to a data point x contained within βB, where β is an adversarial budget parameter. That is, for all x, N(x) = x + βB. This includes ℓp-constrained adversaries as the special case B = {z : ‖z‖p ≤ 1}. For N and P of this form, we will determine infh L(N,P, h) where h can be any measurable function. We first define the following convex optimization problem in order to state Theorem 2. In the proof of Theorem 2, it will become clear how it arises.

Definition 3. Let α∗(β, µ) be the value of the following convex optimization problem over (z, y, α) ∈ Rd+d+1: min α s.t. ‖y‖Σ ≤ α, ‖z‖B ≤ β, z + y = µ, (4) where we use the seminorms ‖y‖Σ = √(yᵀΣ−1y) and ‖z‖B = inf{β : z ∈ βB}.

Theorem 2. Let N(x) = x + βB. Then (CN ◦ C>N)(N(µ,Σ),N(−µ,Σ)) = 1 − 2Q(α∗(β, µ)), where Q is the complementary cumulative distribution function for N(0, 1).

The crucial properties of the solution to (4) are characterized in the following lemma.

Lemma 1. Let µ ∈ Rd, β ≥ 0, and α = α∗(β, µ). There are y, z, w ∈ Rd such that y + z = µ and ‖y‖Σ = α, ‖z‖B = β, ‖w‖Σ∗ = 1, ‖w‖B∗ = γ, wᵀy = α, wᵀz = βγ.

The proof of Lemma 1 is in Section B.1 of the Supplementary.

Proof of Theorem 2. We start from the definition of optimal transport cost and consider the restricted class of “translate and pair in place” couplings to get an upper bound. In these couplings, the adversarial attacks are translations by a constant: X̃1 = X1 + z and X̃−1 = X−1 − z. The total variation coupling between X̃1 and X̃−1 does “pairing in place”. This gives (CN ◦ C>N)(PX1, PX−1) ≤ inf_{z∈βB} CTV(PX̃1, PX̃−1) = inf_{z∈βB} sup_w 2Q((wᵀz − wᵀµ)/√(wᵀΣw)) − 1. The full computation of the total variation between Gaussians is in Section B.2 of the Supplementary. The optimum over w is attained at w∗ = 2Σ−1(z − µ), and the resulting value is governed by √((z − µ)ᵀΣ−1(z − µ)). The choice of z from Lemma 1 makes the upper bound 2Q(−α∗(β, µ)) − 1 = 1 − 2Q(α∗(β, µ)). Now we consider the lower bounds on optimal transport cost from linear classification functions of the form fw(x) = sgn(wᵀx). In the presence of an adversary, the classification problem becomes max_w P(x,y)∼P [fw(x + aw,y(x)) = y]. When y = 1, the correct classification event is fw(x + aw,1(x)) = 1, or equivalently wᵀx − β‖w‖B∗ > 0. This ultimately gives the lower bound (CN ◦ C>N)(PX1, PX−1) ≥ sup_w 1 − 2Q((β‖w‖B∗ − wᵀµ)/‖w‖Σ∗). (5) The full calculation appears in the supplementary material (Section B.3). From Lemma 1, there is a choice of w that makes the bound in (5) equal to 1 − 2Q(α∗(β, µ)).

The proof of Theorem 2 shows that linear classifiers are optimal for this problem. The choice of w provided by Lemma 1 specifies the orientation of the optimal classifier.

4.1 Special cases

Matching norms for data and adversary: When B is the unit ball derived from Σ, the optimization problem (4) has a very simple solution: α∗(β, µ) = ‖µ‖Σ − β, y = (α/‖µ‖Σ)µ, z = (β/‖µ‖Σ)µ, and w = (1/‖µ‖Σ)Σ−1µ. Thus, the same classifier is optimal for all adversarial budgets. In general, α∗(0, µ) = ‖µ‖Σ and α∗(‖µ‖B, µ) = 0, but α∗(β, µ) can be nontrivially convex for 0 ≤ β ≤ ‖µ‖B.
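The matching-norms special case can be checked numerically. Under the reading that Theorems 1 and 2 give a minimum loss of Q(α∗(β, µ)), the optimal robust accuracy for Σ = I and an ℓ2 adversary should be Φ(‖µ‖2 − β), attained by the linear classifier oriented along µ. The Monte Carlo sketch below, with our own parameter choices, compares the empirical robust accuracy of that classifier against this closed form.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
d, beta = 5, 0.7
mu = rng.normal(size=d)
mu *= 2.0 / np.linalg.norm(mu)          # ||mu||_2 = 2, so alpha* = 2 - beta

n = 200_000
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.normal(size=(n, d))      # X | Y = y ~ N(y*mu, I)

w = mu                                   # optimal direction when B is the l2 ball
worst_margin = y * (X @ w) - beta * np.linalg.norm(w, 2)   # l2 adversary, budget beta
robust_acc_mc = np.mean(worst_margin > 0)

alpha_star = np.linalg.norm(mu, 2) - beta
print(robust_acc_mc, 1.0 - norm.sf(alpha_star))    # both approximate Phi(2 - beta)

Using any other direction w gives a lower empirical robust accuracy, which is the optimality claim of Theorem 2 in action.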
1. What are the minor and major points that the reviewer raises regarding the paper? 2. How does the reviewer suggest simplifying the proof of Theorem 1? 3. What is the issue with the "separability condition" in the proof of Theorem 1, and how does the reviewer propose to address it? 4. What is the implication of the reviewer's argument regarding the measure of the product space $\mathcal Z$? 5. How does the reviewer suggest modifying the proof of Theorem 1 to make it more rigorous?
Review
Review
- It is worth mentioning that there has already been some theoretical work linking adversarial robustness to optimal transport. See "Generalized No Free Lunch Theorem for Adversarial Robustness", ICML 2019.
- To be more rigorous, there are a few constraints which must be satisfied by f, g, and c in definition 2, for the ensuing arguments to be strictly valid. For example, f and g should be restricted to bounded continuous functions, etc. etc.
- In the proof of Theorem 2, what is $\tilde{P}_{X_1}$? I checked over and over again, but this quantity was never defined in the manuscript.
- In the proof of Theorem 2, what do you mean by "translate and pair in place"? Please write out the form of such transportation plans explicitly. How do you get the upper bound in terms of TV distance? Please provide all relevant details.
- In inequality (1) of appendix B.2, what is \tilde{P}_{X_1}? What is its relation to the minimization variable z? How do you go from (2) to (3)? The proof as it stands is at best not clear, and at worst incorrect. Too many undefined quantities, and unjustified transitions.
- line 76: ... gives tilde{x} to the learner and ...
- line 92: correctly, ==> correctly.
- lines 224 -- 234: What is m? m ==> d?
- line 177: missing verb "optimizing over plans us" ==> "optimizing over plans gives us"
- In the equation just after line 196, the inf after the first inequality doesn't make sense (variable z is not used!)
- What are X_1 and X_{-1}?
- Theorem 1 and the constructions leading to it could be greatly simplified by only considering "symmetric" neighborhoods N, i.e. x' in N(x) iff x in N(x'). Also only considering spaces for which N(x) subseteq X for all x (which leads to tilde{X} = X) simplifies the whole thing, and is also a reasonable restriction since adversarial examples are only "genuine" if they are from the same space as clean ones.
- The Gaussian example considered in section 4 was already proposed in Tsipras'18 (ref. [74]?). I hand-checked and the lower bound on adversarial error (see eq. 8 of supp. mat) proposed by the present authors matches what was obtained in that paper. The authors should check this, and perhaps add a paragraph in their paper making this point clear.

These are all minor points, and should be easy to address.

Major Edit: There is a Bug in proof of Theorem 1.
======================================
The condition under which Theorem 1 has been proven, namely the "separability condition", is too restrictive: it implies the joint distribution of $X$ and $Y$ is discrete! Indeed, I'll show that the set complement $\mathcal Z\setminus (a_n)_n$ has zero measure, where $\mathcal Z = \mathcal X \times \{-1,1\}$. Indeed, $\mathcal Z \setminus (a_n)_n$ (assumed measurable!) contains no $a_n$, thus by the contrapositive of the "separability condition", we must have $P(\mathcal Z \setminus (a_n)_n) = 0$. Thus the support of $P$ is contained in the sequence $(a_n)_n$, and so the former must be a countable combination of atoms. In case my above measurability assumption (in brackets) is troublesome, one may remedy my argument as follows. Actually the point $\{a_n\}$ may be non-measurable. But we may denote by $c_n$ the infimum of measures of all measurable sets containing $a_n$; this infimum is realized (since the countable intersection of the sequence of minimizing sets is measurable itself), and the set $A_n$ which realizes it is an atom. In any case, the product space $\mathcal Z$ must be discrete!
$\Box$ However (and fortunately), one can discard the troublesome "separability" condition and still prove that in Theorem 1, LHS >= RHS. This is anyways sufficient for obtaining lower bounds on universal error.
NIPS
Title Lower Bounds on Adversarial Robustness from Optimal Transport Abstract While progress has been made in understanding the robustness of machine learning classifiers to test-time adversaries (evasion attacks), fundamental questions remain unresolved. In this paper, we use optimal transport to characterize the minimum possible loss in an adversarial classification scenario. In this setting, an adversary receives a random labeled example from one of two classes, perturbs the example subject to a neighborhood constraint, and presents the modified example to the classifier. We define an appropriate cost function such that the minimum transportation cost between the distributions of the two classes determines the minimum 0− 1 loss for any classifier. When the classifier comes from a restricted hypothesis class, the optimal transportation cost provides a lower bound. We apply our framework to the case of Gaussian data with norm-bounded adversaries and explicitly show matching bounds for the classification and transport problems as well as the optimality of linear classifiers. We also characterize the sample complexity of learning in this setting, deriving and extending previously known results as a special case. Finally, we use our framework to study the gap between the optimal classification performance possible and that currently achieved by state-of-the-art robustly trained neural networks for datasets of interest, namely, MNIST, Fashion MNIST and CIFAR-10. 1 Introduction Machine learning (ML) has become ubiquitous due to its impressive performance in a wide variety of domains such as image recognition [48,72], natural language and speech processing [22,25,37], gameplaying [12,59,71] and aircraft collision avoidance [42]. This ubiquity, however, provides adversaries with both the opportunity and incentive to strategically fool machine learning systems during both the training (poisoning attacks) [5, 9, 40, 60, 67] and test (evasion attacks) [8, 17, 34, 57, 58, 63, 77] phases. In an evasion attack, an adversary adds imperceptible perturbations to inputs in the test phase to cause misclassification. A large number of adversarial example-based evasion attacks have been proposed against ML algorithms used for tasks such as image classification [8, 17, 19, 34, 63, 77], object detection [21, 53, 83], image segmentation [2, 31] and speech recognition [18, 86]; generative ∗Equal contribution. †Work done while at Princeton University 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. models for image data [45] and even reinforcement learning algorithms [38, 46]. These attacks have been carried out in black-box [7, 11, 20, 52, 61, 62, 77] as well as in physical settings [29, 49, 70, 74]. A wide variety of defenses based on adversarial training [34, 54, 78], input de-noising through transformations [6, 24, 28, 69, 84], distillation [65], ensembling [1, 4, 75] and feature nullification [81] were proposed to defend ML algorithms against evasion attacks, only for most to be rendered ineffective by stronger attacks [3, 14–16]. Iterative adversarial training [54] is a current state-of-theart empirical defense. Recently, defenses that rely on adversarial training and are provably robust to small perturbations have been proposed [35, 44, 66, 73] but are unable to achieve good generalization behavior on standard datasets such as CIFAR-10 [47]. 
In spite of an active line of research that has worked to characterize the difficulty of learning in the presence of evasion adversaries by analyzing the sample complexity of learning classifiers for known distributions [68] as well as in the distributionfree setting [23, 56, 85], fundamental questions remain unresolved. One such question is, what is the behavior of the optimal achievable loss in the presence of an adversary? In this paper, we derive bounds on the 0−1 loss of classifiers while classifying adversarially modified data at test time, which is often referred to as adversarial robustness. We first develop a framework that relates classification in the presence of an adversary and optimal transport with an appropriately defined adversarial cost function. For an arbitrary data distribution with two classes, we characterize optimal adversarial robustness in terms of the transportation distance between the classes. When the classifier comes from a restricted hypothesis class, we obtain a lower bound on the minimum possible 0− 1 loss (or equivalently, an upper bound on the maximum possible classification accuracy). We then consider the case of a mixture of two Gaussians and derive matching upper and lower bounds for adversarial robustness by framing it as a convex optimization problem and proving the optimality of linear classifiers. For an `∞ adversary, we also present the explicit solution for this optimization problem and analyze its properties. Further, we derive an expression for sample complexity with the assumption of a Gaussian prior on the mean of the Gaussians which allows us to independently match and extend the results from Schmidt et al. [68] as a special case. Finally, in our experiments, we find transportation costs between the classes of empirical distributions of interest such as MNIST [50], Fashion-MNIST [82] and CIFAR-10 [47] for adversaries bounded by `2 and `∞ distance constraints, and relate them to the classification loss of state-of-the-art robust classifiers. Our results demonstrate that as the adversarial budget increases, the gap between current robust classifiers and the lower bound increases. This effect is especially pronounced for the CIFAR10 dataset, providing a clear indication of the difficulty of robust classification for this dataset. What do these results imply? First, the effectiveness of any defense for a given dataset can be directly analyzed by comparing its robustness to the lower bound. In particular, this allows us to identify regimes of interest where robust classification is possible. Our bound can be used to decide whether a particular adversarial budget is big or small. Second, since our lower bound does not require any distributional assumptions on the data, we are able to directly apply it to empirical distributions, characterizing whether robust classification is possible. Further, in the Gaussian setting, the optimal classifier in the adversarial case depends explicitly on the adversary’s budget. The optimal classifier in the benign case (corresponding to a budget of 0), differs from that for non-zero budgets. This immediately establishes a trade-off between the benign accuracy and adversarial robustness achievable with a given classifier. This raises interesting questions about which classifier should actually be deployed and how large the trade-off is. 
From the explicit solution we derive in the Gaussian setting, we observe that non-robust features occur during classification due to a mismatch between the norms used by the adversary and that governing the data distribution. We expand upon this observation in Section 4.1, which was also made independently by Ilyas et al. [39]. Contributions: We summarize our contributions in this paper as follows: i) we develop a framework for finding general lower bounds for classification error in the presence of an adversary (adversarial robustness) using optimal transport, ii) we show matching upper and lower bounds for adversarial robustness as well as the sample complexity of attaining it for the case of Gaussian data and a convex, origin-symmetric constraint on the adversary and iii) we determine lower bounds on adversarial robustness for empirical datasets of interest and compare them to those of robustly trained classifiers. 2 Preliminaries and Notation In this section, we set up the problem of learning in the presence of an evasion adversary. Such an adversary presents the learner with adversarially modified examples at test time but does not interfere with the training process [17, 34, 77]. We also define notation for the rest of the paper and explain how other work on adversarial examples fits into our setting. We summarize the basic notation in Table 1. We now formally describe the learning problem. There is an unknown P ∈ P(X × {−1, 1}). The learner receives labeled training data (x,y) = ((x0, y0), . . . , (xn−1, yn−1)) ∼ Pn and must select a hypothesis h. The evasion adversary receives a labeled natural example (xTest, yTest) ∼ P and selects x̃ ∈ N(xTest), the set of adversarial examples in the neighborhood of xTest. The adversary gives x̃ to the learner and the learner must estimate yTest. Their performance is measured by the 0-1 loss, `(yTest, h(x̃)). Examples produced by the adversary are elements of a space X̃ . In most applications, X = X̃ , but we find it useful to distinguish them to clarify some definitions. We require N(x) to be nonempty so some choice of x̃ is always available. By taking X = X̃ and N(x) = {x}, we recover the standard problem of learning without an adversary. If N1, N2 are neighborhood functions and N1(x) ⊆ N2(x) for all x ∈ X , N2 represents a stronger adversary. When X = X̃ , a neighborhood function N can be defined using a distance d on X and an adversarial constraint β: N(x) = {x̃ : d(x, x̃) ≤ β}. This provides an ordered family of adversaries of varying strengths used in previous work [17, 34, 68]. The learner’s error rate under the data distribution P with an adversary constrained by the neighborhood function N is L(N,P, h) = E(x,y)∼P [maxx̃∈N(x) `(h(x̃), y)]. 3 Adversarial Robustness from Optimal transport In this section, we explain the connections between adversarially robust classification and optimal transport. At a high level, these arise from the following idea: if a pair of examples, one from each class, are adversarially indistinguishable, then any hypothesis can classify at most one of the examples correctly, By finding families of such pairs, one can obtain lower bounds on classification error rate. When the set of available hypotheses is as large as possible, the best of these lower bounds is tight. Section Roadmap: We will first review some basic concepts from optimal transport theory [80]. 
Then, we will define a cost function for adversarial classification as well as its associated potential functions that are needed to establish Kantorovich duality. We show how a coupling between the conditional distributions of the two classes can be obtained by composing couplings derived from the adversarial strategy and the total variation distance, which links hypothesis testing and transportation costs. Finally, we show that the potential functions have an interpretation in terms of classification, which leads to our theorem connecting adversarial robustness to the optimal transport cost. 3.1 Basic definitions from optimal transport In this section, we use capital letters for random variables and lowercase letters for points in spaces. Couplings A coupling between probability distributions PX on X and PY on Y is a joint distribution on X × Y with marginals PX and PY . Let Π(PX , PY ) be the set of such couplings. Definition 1 (Optimal transport cost). For a cost function c : X × Y → R ∪ {+∞} and marginal distributions PX and PY , the optimal transport cost is C(PX , PY ) = inf PXY ∈Π(PX ,PY ) E(X,Y )∼PXY [c(X,Y )]. (1) Potential functions and Kantorovich duality There is a dual characterization of optimal transport cost in terms of potential functions which we use to make the connection between the transport and classification problems. Definition 2 (Potential functions). Functions f : X → R and g : Y → R are potential functions for the cost c if g(y)− f(x) ≤ c(x, y) for all (x, y) ∈ X × Y . A pair of potential functions provide a one-dimensional representation of the spaces X and Y . This representation must be be faithful to the cost structure on the original spaces: if a pair of points (x, y) are close in transportation cost, then f(x) must be close to g(y). In the dual optimization problem for optimal transport cost, we search for a representation that separates PX from PY as much as possible: C(PX , PY ) = sup f,g EY∼PY [g(Y )]− EX∼PX [f(X)]. (2) For any choices of f , g, and PXY , it is clear that E[g(Y )]− E[f(X)] ≤ E[c(X,Y )]. Kantorovich duality states that there are in fact choices for f and g that attain equality. Define the dual of f relative to c to be f c(y) = infx c(x, y) + f(x). This is the largest function that forms a potential for c when paired with with f . In (2), it is sufficient to optimize over pairs (f, f c). Compositions The composition of cost functions c : X × Y → R and c′ : Y × Z → R is (c ◦ c′) : X × Z → R (c ◦ c′)(x, z) = inf y∈Y c(x, y) + c′(y, z). The composition of optimal transport costs can be defined in two equivalent ways: (C ◦ C ′)(PX , PZ) = inf PY C(PX , PY ) + C ′(PY , PZ) = inf PXZ E[(c ◦ c′)(X,Z)] Total variation distance The total variation distance between distributions P and Q is CTV(P,Q) = sup A P (A)−Q(A). (3) We use this notation because it is the optimal transport cost for the cost function cTV : X × X → R, cTV(x, x ′) = 1[x 6= x′]. Observe that (3) is equivalent to (2) with the additional restrictions that f(x) ∈ {0, 1} for all x, i.e. f is an indicator function for some set A and g = f cTV . For binary classification with a symmetric prior on the classes, a set A that achieves the optimum in Eq. (3) corresponds to an optimal test for distinguishing P from Q. 3.2 Adversarial cost functions and couplings We now construct specialized version of costs and couplings that translate between robust classification and optimal transport. 
3.2 Adversarial cost functions and couplings We now construct specialized versions of costs and couplings that translate between robust classification and optimal transport. Cost functions for adversarial classification The adversarial constraint information N can be encoded into the following cost function cN : X × X̃ → R: cN(x, x̃) = 1[x̃ ∉ N(x)]. The composition of cN and c>N (i.e., cN with the arguments flipped) has a simple combinatorial interpretation: (cN ◦ c>N)(x, x′) = 1[N(x) ∩ N(x′) = ∅]. Perhaps the most well-known example of optimal transport is the earth-mover’s or 1-Wasserstein distance, where the cost function is a metric on the underlying space. In general, the transportation cost cN ◦ c>N is not a metric on X because (cN ◦ c>N)(x, x′) = 0 does not necessarily imply x = x′. However, when (cN ◦ c>N)(x, x′) = 0, we say that the points are adversarially indistinguishable. Couplings from adversarial strategies Let a : X → X̃ be a function such that a(x) ∈ N(x) for all x ∈ X. Then a is an admissible adversarial perturbation strategy. The adversarial expected risk can be expressed as a maximization over adversarial strategies: L(N, P, h) = sup_{a1,a−1} E_{(x,c)∼P}[ℓ(h(a_c(x)), c)]. Let X̃1 = a1(X1), so a1 gives a coupling PX1X̃1 between PX1 and PX̃1. By construction, CN(PX1, PX̃1) = 0. A general coupling between PX1 and PX̃1 with CN(PX1, PX̃1) = 0 corresponds to a randomized adversarial strategy. We define PX̃−1 and PX−1X̃−1 analogously. By composing the adversarial strategy coupling PX1X̃1, the total variation coupling of PX̃1 and PX̃−1, and PX̃−1X−1, we obtain a coupling PX1X−1. Potential functions from classifiers Now we can explore the relationship between transport and classification. Consider a given hypothesis h : X̃ → {−1, 1}. A labeled adversarial example (x̃, y) is classified correctly if x̃ ∈ h^{-1}(y). A labeled example (x, y) is classified correctly if N(x) ⊆ h^{-1}(y). Following Cullina et al. [23], we define degraded hypotheses h̃ : X → {−1, 1, ⊥}, where h̃(x) = y if N(x) ⊆ h^{-1}(y) and h̃(x) = ⊥ otherwise. This allows us to express the adversarial classification accuracy of h, 1 − L(N, h, P), as (1/2)(E[1[h̃(X1) = 1]] + E[1[h̃(X−1) = −1]]). Observe that 1[h̃(x) = 1] + 1[h̃(x′) = −1] ≤ (cN ◦ c>N)(x, x′) + 1. Thus the functions f(x) = 1 − 1[h̃(x) = 1] and g(x) = 1[h̃(x) = −1] are admissible potentials for cN ◦ c>N. This is illustrated in Figure 1. Our first theorem characterizes optimal adversarial robustness when h is allowed to be any classifier. Theorem 1. Let X and X̃ be Polish spaces and let N : X → 2^X̃ be an upper-hemicontinuous neighborhood function such that N(x) is nonempty and closed for all x. For any pair of distributions PX1, PX−1 on X, (CN ◦ C>N)(PX1, PX−1) = 1 − 2 inf_h L(N, h, P), where h : X̃ → {1, −1} can be any measurable function. Furthermore, there is some h that achieves the infimum. In the case of finite spaces, this theorem is essentially equivalent to the König-Egerváry theorem on the size of a maximum matching in a bipartite graph. The full proof is in Section A of the Supplementary. If instead of all measurable functions we consider h ∈ H, a smaller hypothesis class, Theorem 1 provides a lower bound on inf_{h∈H} L(N, h, P).
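The following sketch (made-up data, a hypothetical linear classifier, and an ℓ2-ball adversary, none of it from the paper) illustrates the degraded hypothesis h̃ and checks the pointwise inequality that makes f and g admissible potentials for cN ◦ c>N.

import numpy as np

# Degraded hypothesis and potential-function inequality from Section 3.2,
# for N(x) = {x_tilde : ||x_tilde - x||_2 <= beta} and a linear classifier h.
rng = np.random.default_rng(0)
d, beta = 5, 0.7
w = rng.normal(size=d)

def h(x):                       # hypothesis evaluated on adversarial examples
    return 1 if w @ x > 0 else -1

def h_degraded(x):
    # h_tilde(x) = y if the whole neighborhood N(x) is classified as y, else abstain.
    # For a linear classifier and an l2 ball this reduces to a margin condition.
    margin = (w @ x) / np.linalg.norm(w)
    if margin > beta:
        return 1
    if margin < -beta:
        return -1
    return 0                    # 0 stands in for the abstention symbol

def indistinguishable(x, xp):
    # (c_N o c_N^T)(x, x') = 1[N(x) and N(x') are disjoint]; for l2 balls of radius
    # beta the intersection is empty exactly when ||x - x'||_2 > 2 * beta.
    return int(np.linalg.norm(x - xp) > 2 * beta)

# Check 1[h~(x)=1] + 1[h~(x')=-1] <= (c_N o c_N^T)(x, x') + 1 on random pairs.
for _ in range(10000):
    x, xp = rng.normal(size=d), rng.normal(size=d)
    lhs = int(h_degraded(x) == 1) + int(h_degraded(xp) == -1)
    assert lhs <= indistinguishable(x, xp) + 1
print("potential-function inequality holds on all sampled pairs")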
4 Gaussian data: Optimal loss In this section, we consider the case when the data is generated from a mixture of two Gaussians with identical covariances and means that differ in sign. Directly applying (1) or (2) requires optimizing over either all classifiers or all transportation plans. However, a classifier and a coupling that achieve the same cost must both be optimal. We use this to show that optimizing over linear classifiers and ‘translate and pair’ transportation plans characterizes adversarial robustness in this case. Problem setup: Consider a labeled example (X, Y) ∈ R^d × {−1, 1} such that the example X has a Gaussian conditional distribution, X|(Y = y) ∼ N(yµ, Σ), and Pr(Y = 1) = Pr(Y = −1) = 1/2. Let B ⊆ R^d be a closed, convex, absorbing, origin-symmetric set. The adversary is constrained to add perturbations to a data point x contained within βB, where β is an adversarial budget parameter. That is, for all x, N(x) = x + βB. This includes ℓp-constrained adversaries as the special case B = {z : ‖z‖p ≤ 1}. For N and P of this form, we will determine inf_h L(N, P, h) where h can be any measurable function. We first define the following convex optimization problem in order to state Theorem 2. In the proof of Theorem 2, it will become clear how it arises. Definition 3. Let α∗(β, µ) be the value of the following convex optimization problem over (z, y, α) ∈ R^{d+d+1}: minimize α subject to ‖y‖Σ ≤ α, ‖z‖B ≤ β, z + y = µ, (4) where we use the seminorms ‖y‖Σ = √(yᵀΣ⁻¹y) and ‖z‖B = inf{β : z ∈ βB}. Theorem 2. Let N(x) = x + βB. Then (CN ◦ C>N)(N(µ, Σ), N(−µ, Σ)) = 1 − 2Q(α∗(β, µ)), where Q is the complementary cumulative distribution function for N(0, 1). The crucial properties of the solution to (4) are characterized in the following lemma. Lemma 1. Let µ ∈ R^d, β ≥ 0, and α = α∗(β, µ). There are y, z, w ∈ R^d such that y + z = µ and ‖y‖Σ = α, ‖z‖B = β, ‖w‖Σ∗ = 1, ‖w‖B∗ = γ, wᵀy = α, wᵀz = βγ. The proof of Lemma 1 is in Section B.1 of the Supplementary. Proof of Theorem 2. We start from the definition of optimal transport cost and consider the restricted class of “translate and pair in place” couplings to get an upper bound. In these couplings, the adversarial attacks are translations by a constant: X̃1 = X1 + z and X̃−1 = X−1 − z. The total variation coupling between X̃1 and X̃−1 does “pairing in place”. (CN ◦ C>N)(PX1, PX−1) ≤ inf_{z∈βB} CTV(PX̃1, PX̃−1) = inf_{z∈βB} sup_w 2Q((wᵀz − wᵀµ)/√(wᵀΣw)) − 1. The full computation of the total variation between Gaussians is in Section B.2 of the Supplementary. The infimum is attained at w∗ = 2Σ⁻¹(z − µ) and its value is √((z − µ)ᵀΣ⁻¹(z − µ)). The choice of z from Lemma 1 makes the upper bound 2Q(−α∗(β, µ)) − 1 = 1 − 2Q(α∗(β, µ)). Now we consider the lower bounds on optimal transport cost from linear classification functions of the form fw(x) = sgn(wᵀx). In the presence of an adversary, the classification problem becomes max_w Pr_{(x,y)∼P}[fw(x + a_{w,y}(x)) = y]. When y = 1, the correct classification event is fw(x + a_{w,1}(x)) = 1, or equivalently wᵀx − β‖w‖B∗ > 0. This ultimately gives the lower bound (CN ◦ C>N)(PX1, PX−1) ≥ sup_w 1 − 2Q((β‖w‖B∗ − wᵀµ)/‖w‖Σ∗). (5) The full calculation appears in the supplementary material (Section B.3). From Lemma 1, there is a choice of w that makes the bound in (5) equal to 1 − 2Q(α∗(β, µ)). The proof of Theorem 2 shows that linear classifiers are optimal for this problem. The choice of w provided by Lemma 1 specifies the orientation of the optimal classifier.
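To make Definition 3 concrete, here is a sketch of the convex program (4) written with cvxpy (an arbitrary choice of solver interface, not the authors' code) for an ℓ∞-ball constraint set B and a made-up µ and Σ. Combining Theorems 1 and 2, the optimal adversarial 0-1 loss is Q(α∗(β, µ)), which the last line reports.

import numpy as np
import cvxpy as cp
from scipy.stats import norm

d = 5
rng = np.random.default_rng(0)
mu = rng.uniform(0.5, 2.0, size=d)
Sigma = np.diag(rng.uniform(0.5, 2.0, size=d))              # any PSD covariance works
Sigma_inv_sqrt = np.linalg.inv(np.linalg.cholesky(Sigma))   # so ||y||_Sigma = ||Sigma_inv_sqrt @ y||_2

def alpha_star(beta):
    y = cp.Variable(d)
    z = cp.Variable(d)
    alpha = cp.Variable(nonneg=True)
    constraints = [
        cp.norm(Sigma_inv_sqrt @ y, 2) <= alpha,   # ||y||_Sigma <= alpha
        cp.norm(z, "inf") <= beta,                 # ||z||_B <= beta for B the l_inf unit ball
        z + y == mu,
    ]
    cp.Problem(cp.Minimize(alpha), constraints).solve()
    return alpha.value

for beta in [0.0, 0.5, 1.0]:
    a = alpha_star(beta)
    print(f"beta={beta:.1f}  alpha*={a:.3f}  optimal adversarial loss Q(alpha*)={norm.sf(a):.3f}")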
4.1 Special cases Matching norms for data and adversary: When B is the unit ball derived from Σ, the optimization problem (4) has a very simple solution: α∗(β, µ) = ‖µ‖Σ − β, y = αµ, z = βµ, and w = (1/‖µ‖Σ)Σ⁻¹µ. Thus, the same classifier is optimal for all adversarial budgets. In general, α∗(0, µ) = ‖µ‖Σ and α∗(‖µ‖B, µ) = 0, but α∗(β, µ) can be nontrivially convex for 0 ≤ β ≤ ‖µ‖B. When there is a difference between the two seminorms, the optimal modification is not proportional to µ, which can be used by the adversary. The optimal classifier varies with the adversarial budget, so there is a trade-off between accuracy and robust accuracy. ℓ∞ adversaries: In Figure 2, we illustrate this phenomenon for an ℓ∞ adversary. We plot α∗(β, µ) for Σ = I (so ‖·‖Σ = ‖·‖2) and taking B to be the ℓ∞ unit ball (so ‖·‖B = ‖·‖∞). In this case (4) has an explicit solution. For each coordinate zi, set zi = min(µi, β), which gives yi = µi − min(µi, β); this makes the constraints tight. Thus, as β increases, more components of z equal those of µ, reducing the marginal effect of an additional increase in β. Due to the mismatch between the seminorms governing the data and adversary, the value of β determines which features are useful for classification, since features smaller than β can be completely erased. Without an adversary, all of these features would be potentially useful for classification, implying that human-imposed adversarial constraints, with their mismatch from the underlying geometry of the data distribution, lead to the presence of non-robust features that are nevertheless useful for classification. A similar observation was made in concurrent work by Ilyas et al. [39]. 5 Gaussian data: Sample complexity lower bound In this section, we use the characterization of the optimal loss in the Gaussian robust classification problem to establish the optimality of a rule for learning from a finite number of samples. This allows for precise characterization of sample complexity in the learning problem. Consider the following Bayesian learning problem, which generalizes a problem considered by Schmidt et al. [68]. We start from the classification problem defined in Section 4. There, the choice of the classifier h could directly depend on µ and Σ. Now we give µ the distribution N(0, (1/m)I). A learner who knows this prior but not the value of µ is provided with n i.i.d. labeled training examples. The learner selects any measurable classification function ĥn : R^d → {−1, 1} by applying some learning algorithm to the training data with the goal of minimizing E[L(N, P, ĥn)]. The optimal transport approach allows us to determine the exact optimal loss for this problem for each n as well as the optimal learning algorithm. To characterize this loss, we need the following definitions. Let A be the ℓ2 unit ball: {y ∈ R^d : ‖y‖2 ≤ 1}. Let S(α, β) = {(x, t) ∈ R^d × R : x ∈ tαA + βB}. Theorem 3. In the learning problem described above, the minimum loss of any learning rule is Pr_{V∼N(0,I)}[V ∈ S(ρ, βρ)], where ρ² = m(m+n)/n. The proof is in Section C of the Supplementary. The special case where B is an ℓ∞ ball was considered by Schmidt et al. [68]. They obtained a lower bound on loss that can be expressed in our notation as Pr[V ∈ S(0, ρβ)]. This bound essentially ignores the random noise in the problem and computes the probability that after seeing n training examples, the posterior distributions for Xn+1|(Yn+1 = 1) and Xn+1|(Yn+1 = −1) are adversarially indistinguishable. The true optimal loss takes into account the intermediate case in which these posterior distributions are difficult but not impossible to distinguish in the presence of an adversary. Schmidt et al. investigate sample complexity in the following parameter regime: m = c1d^(1/2), which by design is a low noise regime.
In this regime, they establish upper and lower bounds on the sample complexity of learning an adversarially robust classifier: Cβ²d/log d ≤ n ≤ C′β²d. By taking into account the effect of the random noise, our characterization of the loss closes this gap. For larger values of m, the difference between Pr[V ∈ S(0, ρβ)] and Pr[V ∈ S(ρ, ρβ)] becomes more significant, so our analysis is useful over a much broader range of parameters. 6 Experimental Results In this section, we use Theorem 1 to find lower bounds on adversarial robustness for empirical datasets of interest. We also compare these bounds to the performance of robustly trained classifiers on adversarial examples and find a gap for larger perturbation values. For reproducibility purposes, our code is available at https://github.com/inspire-group/robustness-via-transport. 6.1 Experimental Setup We consider the adversarial classification problem on three widely used image datasets, namely MNIST [50], Fashion-MNIST [82] and CIFAR-10 [47], and obtain lower bounds on the adversarial robustness for any classifier for these datasets. For each dataset, we use data from classes 3 (PX1) and 7 (PX−1) to obtain a binary classification problem. This choice is arbitrary and similar results are obtained with other choices, which we omit for brevity. We use 2000 images from the training set of each class to compute the lower bound on adversarial robustness when the adversary is constrained using the ℓ2 norm. For the ℓ∞ norm, these pairs of classes are very well separated, making the lower bounds less interesting (results in Section D of the Supplementary). For the MNIST and Fashion-MNIST datasets, we compare the lower bound with the performance of a 3-layer Convolutional Neural Network (CNN) that is robustly trained using iterative adversarial training [54] with the Adam optimizer [43] for 12 epochs. This network achieves 99.9% accuracy on the ‘3 vs. 7’ binary classification task on both MNIST and Fashion-MNIST. For the CIFAR-10 dataset, we use a ResNet-18 [36] trained for 200 epochs, which achieves 97% accuracy on the binary classification task. To generate adversarial examples both during the training process and to test robustness, we use Projected Gradient Descent (PGD) with an ℓ2 constraint, random initialization and a minimum of 10 iterations. Since more powerful heuristic attacks may be possible against these robustly trained classifiers, the ‘robust classifier loss’ reported here is a lower bound. 6.2 Lower bounds on adversarial robustness for empirical distributions Now, we describe the steps we follow to obtain a lower bound on adversarial robustness for empirical distributions through a direct application of Theorem 1. We first create a k × k matrix D whose entries are ‖xi − xj‖p, where k is the number of samples from each class and p defines the norm. Next, we threshold these entries to obtain Dthresh, the matrix of adversarial costs (cN ◦ c>N)(xi, xj) (recall Section 3.2), whose (i, j)th entry is 1 if Dij > 2β and 0 otherwise, where β is the constraint on the adversary. Finally, the optimal coupling cost (CN ◦ C>N)(PX1, PX−1) is computed by performing minimum weight matching over the bipartite graph defined by the cost matrix Dthresh using the Linear Sum Assignment module from Scipy [41]. In Figure 4, we show the variation in the minimum possible 0-1 loss (adversarial robustness) in the presence of an ℓ2-constrained adversary as the attack budget β is increased.
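A minimal reimplementation of this matching procedure on synthetic stand-in data (my own sketch, not the code released in the repository above) looks as follows; it returns the lower bound on the adversarial 0-1 loss of any classifier implied by Theorem 1.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def robustness_lower_bound(X1, Xm1, beta, p=2):
    # Lower bound on the adversarial 0-1 loss of *any* classifier, from k samples
    # per class and an l_p adversarial budget beta (the procedure of Section 6.2).
    D = cdist(X1, Xm1, metric="minkowski", p=p)       # k x k matrix of ||x_i - x_j||_p
    D_thresh = (D > 2 * beta).astype(float)           # (c_N o c_N^T)(x_i, x_j)
    row, col = linear_sum_assignment(D_thresh)        # minimum-weight perfect matching
    coupling_cost = D_thresh[row, col].mean()         # (C_N o C_N^T)(P_X1, P_X-1)
    # Theorem 1: inf_h L(N, h, P) = (1 - coupling cost) / 2
    return (1.0 - coupling_cost) / 2.0

# toy illustration on synthetic 2-D clusters standing in for the two classes
rng = np.random.default_rng(0)
X1 = rng.normal(+1.5, 1.0, size=(200, 2))
Xm1 = rng.normal(-1.5, 1.0, size=(200, 2))
for beta in [0.0, 1.0, 2.0, 3.0]:
    print(f"beta={beta:.1f}  minimum possible adversarial loss >= {robustness_lower_bound(X1, Xm1, beta):.3f}")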
We compare this loss value to that of a robustly trained classifier [54] when the PGD attack is used (on the same data). Until a certain β value, robust training converges and the model attains a non-trivial adversarial robustness value. Nevertheless, there is a gap between the empirically obtained and theoretically predicted minimum loss values. Further, after β = 3.8 (MNIST), β = 4.8 (Fashion MNIST) and β = 1.5, we observe that robust training is unable to converge. We believe this occurs as a large fraction of the data at that value of β is close to the boundary when adversarially perturbed, making the classification problem very challenging. We note that in order to reduce the classification accuracy to random for CIFAR-10, a much larger `2 budget is needed compared to either MNIST or Fashion-MNIST, implying that the classes are better separated. 7 Related work and Concluding Remarks We only discuss the closest related work that analyzes evasion attacks theoretically. Extensive recent surveys [10, 51, 64] provide a broader overview. Distribution-specific generalization analysis: Schimdt et al. [68] studied the sample complexity of learning a mixture of Gaussians as well as Bernoulli distributed data in the presence of `∞-bounded adversaries, which we recover as a special case of our framework in 5. Gilmer et al. [33] and Diochnos et al. [26] analyzed the robustness of classifiers for specific distributions, i.e. points distributed on two concentric spheres and points on the Boolean hypercube respectively. In contrast to these papers, our framework applies for any binary classification problem as our lower bound applies to arbitrary distributions. Sample complexity in the PAC setting: Cullina et al. [23], Yin et al. [85] and Montasser et al. [56] derive the sample complexity needed to PAC-learn a hypothesis class in the presence of an evasion adversary. These approaches do not provide an analysis of the optimal loss under a given distribution, but only of the number of samples needed to get -close to it, i.e. to learn the best empirical hypothesis. Optimal transport for bounds on adversarial robustness: Sinha et al. [73] constrain the adversary using a Wasserstein distance bound on the distribution that results from perturbing the benign distribution and study the sample complexity of SGD for minimizing the relaxed Lagrangian formulation of the learning problem with this constraint. In contrast, we use a cost function that characterizes sample-wise adversarial perturbation exactly, which aligns with current practice and provide a lower bound on the 0− 1 loss with an adversary, while Sinha et al. minimize an upper bound to perform robust training. Mahloujifar et al. [55] and Dohmatob [27] use the ‘blowup’ property exhibited by certain data distributions to provide bounds on adversarial risk, given some level of ordinary risk. In comparison, our assumptions on the example space, distribution, and adversarial constraints are much milder. Even in regimes where these frameworks are applicable, our approach provides two key advantages. First, our bounds explicitly concern the adversarial robustness of the optimal classifier, while theirs relate the adversarial robustness to the benign classification error of a classifier. Thus, our bounds can still be nontrivial even when there is a classifier with a benign classification error of zero, which is exactly the case in our MNIST experiments. 
Second, our bounds apply for any adversarial budget while theirs become non-trivial only when the adversarial budget exceeds a critical threshold depending on the properties of the space. Possibility of robust classification: Bubeck et al. [13] show that there exist classification tasks in the statistical query model for which there is no efficient algorithm to learn robust classifiers. Tsipras et al. [79], Zhang et al. [87] and Suggala et al. [76] study the trade-offs between robustness and accuracy. We discuss this trade-off for Gaussian data in Section 4. 7.1 Concluding remarks Our framework provides lower bounds on adversarial robustness through the use of optimal transport for binary classification problems, which we apply to empirical datasets of interest to analyze the performance of current defenses. In future work, we will extend our framework to the multi-class classification setting. As a special case, we also characterize the learning problem exactly in the case of Gaussian data and study the relationship between noise in the learning problem and adversarial perturbations. Recent work [30, 32] has established an empirical connection between these two noise regimes and an interesting direction would be to precisely characterize which type of noise dominates the learning process for a given adversarial budget. Another natural next step would be to consider distributions beyond the Gaussian to derive expressions for optimal adversarial robustness as well as the sample complexity of attaining it. Acknowledgements We would like to thank Chawin Sitawarin for providing part of the code used in our experiments. This research was sponsored by the National Science Foundation under grants CNS-1553437, CNS1704105, CIF-1617286 and EARS-1642962, by Intel through the Intel Faculty Research Award, by the Office of Naval Research through the Young Investigator Program (YIP) Award, by the Army Research Office through the Young Investigator Program (YIP) Award and a Schmidt DataX Award. ANB would like to thank Siemens for supporting him through the FutureMakers Fellowship.
1. How does the paper approach computing a theoretical lower bound for robust accuracy in binary classification? 2. What is the significance of using a distance metric between two classes in the paper's approach? 3. How does the paper's method compare to other approaches to looking at the problem of robust accuracy? 4. How could the framework presented in the paper be extended to multiclass classification? 5. How would the method's scalability be affected when applied to a larger number of classes? 6. Why are robust classifier losses not shown in the CIFAR plots, and how would including them impact the visual representation of the data? 7. Can the training issues above certain beta in Figure 3 be resolved by choosing a more expressive model with appropriate parameters? 8. Is there an error in the proof of Theorem 3, specifically regarding the notation used for the probability measure?
Review
Review General comment: This paper provides a nice principled way to compute a theoretical lower bound for robust accuracy based on a distance metric between two classes in binary classification. This makes rigorous the intuition that the more separated the two classes are (in the sense of the perturbation neighborhood the attacker can search in), the lower the optimal robust loss should be - bounded below by the optimal standard loss. Although I believe this type of result could have also been shown outside the optimal transport framework with perhaps less notation, it is one justified way to look at the problem. The paper is written in a coherent manner, and the theory is clear, as are the experiments. I am not very aware of similar work - if indeed it is the first to prove lower bounds for adversarial robustness using a distributional distance metric I would highly recommend this paper for publication at NeurIPS. Here are some questions that remain, from high to low level: - How could this framework be extended to multiclass classification? And how would the method scale with the number of classes? - For the CIFAR plots you should add robust classifier losses as well even though I understand they might not look that great, that is what we have at the moment! - For Figure 3, apparently there are training issues above a certain beta. What is the adversarial training accuracy in those cases? I suspect that it is below 100%. In this case, choosing a sufficiently expressive model with the right parameters should mitigate this issue? - In proof of theorem 3: It should read Pr(V \in ...) on the RHS.
NIPS
Title Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization Abstract The problem of inverse reinforcement learning (IRL) is relevant to a variety of tasks including value alignment and robot learning from demonstration. Despite significant algorithmic contributions in recent years, IRL remains an ill-posed problem at its core; multiple reward functions coincide with the observed behavior and the actual reward function is not identifiable without prior knowledge or supplementary information. This paper presents an IRL framework called Bayesian optimization-IRL (BO-IRL) which identifies multiple solutions that are consistent with the expert demonstrations by efficiently exploring the reward function space. BO-IRL achieves this by utilizing Bayesian Optimization along with our newly proposed kernel that (a) projects the parameters of policy invariant reward functions to a single point in a latent space and (b) ensures nearby points in the latent space correspond to reward functions yielding similar likelihoods. This projection allows the use of standard stationary kernels in the latent space to capture the correlations present across the reward function space. Empirical results on synthetic and realworld environments (model-free and model-based) show that BO-IRL discovers multiple reward functions while minimizing the number of expensive exact policy optimizations. 1 Introduction Inverse reinforcement learning (IRL) is the problem of inferring the reward function of a reinforcement learning (RL) agent from its observed behavior [1]. Despite wide-spread application (e.g., [1, 4, 5, 27]), IRL remains a challenging problem. A key difficulty is that IRL is ill-posed; typically, there exist many solutions (reward functions) for which a given behavior is optimal [2, 3, 29] and it is not possible to infer the true reward function from among these alternatives without additional information, such as prior knowledge or more informative demonstrations [9, 15]. Given the ill-posed nature of IRL, we adopt the perspective that an IRL algorithm should characterize the space of solutions rather than output a single answer. Indeed, there is often no one correct solution. Although this approach differs from traditional gradient-based IRL methods [38] and modern deep incarnations that converge to specific solutions in the reward function space (e.g., [12, 14]), it is not entirely unconventional. Previous approaches, notably Bayesian IRL (BIRL) [32], share this view and return a posterior distribution over possible reward functions. However, BIRL and other similar methods [25] are computationally expensive (often due to exact policy optimization steps) or suffer from issues such as overfitting [8]. In this paper, we pursue a novel approach to IRL by using Bayesian optimization (BO) [26] to minimize the negative log-likelihood (NLL) of the expert demonstrations with respect to reward functions. BO is specifically designed for optimizing expensive functions by strategically picking inputs to evaluate and appears to be a natural fit for this task. In addition to the samples procured, the Gaussian process (GP) regression used in BO returns additional information about the discovered 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. reward functions in the form of a GP posterior. 
Uncertainty estimates of the NLL for each reward function enable downstream analysis, and existing methods such as active learning [23] and active teaching [9] can be used to further narrow down these solutions. Given the benefits above, it may appear surprising that BO has not yet been applied to IRL, considering its application to many different domains [35]. A possible reason may be that BO does not work “out-of-the-box” for IRL despite its apparent suitability. Indeed, our initial naïve application of BO to IRL failed to produce good results. Further investigation revealed that standard kernels were unsuitable for representing the covariance structure in the space of reward functions. In particular, they ignore policy invariance [3] where a reward function maintains its optimal policy under certain operations such as linear translation. Leveraging this insight, we contribute a novel ρ-projection that remedies this problem. Briefly, the ρ-projection maps policy invariant reward functions to a single point in a new representation space where nearby points share similar NLL; Fig. 1 illustrates this key idea on a Gridworld environment, which will be our running example throughout this paper. With the ρ-projection in hand, standard stationary kernels (such as the popular RBF) can be applied in a straightforward manner. We provide theoretical support for this property, and experiments on a variety of environments (both discrete and continuous, with model-based and model-free settings) show that our BO-IRL algorithm (with ρ-projection) efficiently captures the correlation structure of the reward space and outperforms representative state-of-the-art methods. 2 Preliminaries and Background Markov Decision Process (MDP). An MDP is defined by a tuple M : 〈S, A, P, R, γ〉 where S is a finite set of states, A is a finite set of actions, P(s′|s, a) is the conditional probability of next state s′ given current state s and action a, R : S × A × S → R denotes the reward function, and γ ∈ (0, 1) is the discount factor. An optimal policy π∗ is a policy that maximizes the expected sum of discounted rewards E[∑_{t=0}^∞ γ^t R(st, at, st+1) | π, M]. The task of finding an optimal policy is referred to as policy optimization. If the MDP is fully known, then policy optimization can be performed via dynamic programming. In model-free settings, RL algorithms such as proximal policy optimization [34] can be used to obtain a policy. Inverse Reinforcement Learning (IRL). Often, it is difficult to manually specify or engineer a reward function. Instead, it may be beneficial to learn it from experts. The problem of inferring the unknown reward function from a set of (near) optimal demonstrations is known as IRL. The learner is provided with an MDP without a reward function, M\R, and a set T ≜ {τi}_{i=1}^N of N trajectories. Each trajectory τ ≜ {(st, at)}_{t=0}^{L−1} is of length L. Similar to prior work, we assume that the reward function can be represented by a real vector θ ∈ Θ ⊆ R^d and is denoted by Rθ(s, a, s′). Overloading our notation, we denote the discounted reward of a trajectory τ as Rθ(τ) ≜ ∑_{t=0}^{L−1} γ^t Rθ(st, at, st+1). In the maximum entropy framework [38], the probability pθ(τ) of a given trajectory is related to its discounted reward as follows: pθ(τ) = exp(Rθ(τ))/Z(θ) (1) where Z(θ) is the partition function, which is intractable in most practical scenarios.
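As a small illustration (not the authors' code), the discounted trajectory reward Rθ(τ) and the unnormalized maximum entropy weights exp(Rθ(τ)) from (1) can be computed as below; the chain environment, the linear reward features, and the parameter vector θ are all made up, and the partition function Z(θ) is deliberately left uncomputed since summing over all trajectories is exactly what is intractable.

import numpy as np

gamma = 0.9

def discounted_return(traj, theta, features):
    # traj is a list of (s, a, s_next) triples; R_theta(s, a, s') = theta . phi(s, a, s')
    return sum((gamma ** t) * features(s, a, s_next) @ theta
               for t, (s, a, s_next) in enumerate(traj))

def maxent_weights(trajs, theta, features):
    # Unnormalised weights proportional to exp(R_theta(tau)); computing Z(theta)
    # would require summing over *all* trajectories, which is intractable here.
    returns = np.array([discounted_return(tau, theta, features) for tau in trajs])
    return np.exp(returns - returns.max())   # stabilised but still only proportional

# toy example: 1-D gridworld states, features = one-hot encoding of the next state
n_states = 4
features = lambda s, a, s_next: np.eye(n_states)[s_next]
theta = np.array([0.0, 0.1, 0.2, 1.0])       # hypothetical reward parameters
tau_good = [(0, +1, 1), (1, +1, 2), (2, +1, 3)]
tau_bad = [(0, +1, 1), (1, -1, 0), (0, +1, 1)]
print(maxent_weights([tau_good, tau_bad], theta, features))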
The optimal parameter θ∗ is given by argmin_θ LIRL(θ) where LIRL(θ) ≜ −∑_{τ∈T} ∑_{t=0}^{L−2} [log(π∗θ(st, at)) + log(P(st+1|st, at))] (2) is the negative log-likelihood (NLL) and π∗θ is the optimal policy computed using Rθ. 3 Bayesian Optimization-Inverse Reinforcement Learning (BO-IRL) Recall that IRL algorithms take as input an MDP M\R, a space Θ of reward function parameters, and a set T of N expert demonstrations. We follow the maximum entropy framework where the optimal parameter θ∗ is given by argmin_θ LIRL(θ) and LIRL(θ) takes the form shown in (2). Unfortunately, calculating π∗θ in (2) is expensive, which renders exhaustive exploration of the reward function space infeasible. To mitigate this expense, we propose to leverage Bayesian optimization (BO) [26]. Bayesian optimization is a general sequential strategy for finding a global optimum of an expensive black-box function f : X → R defined on some bounded set X ⊆ R^d. In each iteration t = 1, . . . , T, an input query xt ∈ X is selected to evaluate the value of f, yielding a noisy output yt ≜ f(xt) + ε where ε ∼ N(0, σ²) is i.i.d. Gaussian noise with variance σ². Since evaluation of f is expensive, a surrogate model is used to strategically select input queries to approach the global minimizer x∗ = argmin_{x∈X} f(x). The candidate xt is typically found by maximizing an acquisition function. In this work, we use a Gaussian process (GP) [36] as the surrogate model and expected improvement (EI) [26] as our acquisition function. Gaussian process (GP). A GP is a collection of random variables {f(x)}_{x∈X} where every finite subset follows a multivariate Gaussian distribution. A GP is fully specified by its prior mean µ(x) and covariance k(x, x′) for all x, x′ ∈ X. In typical settings, µ(x) is often set to zero and the kernel function k(x, x′) is the primary ingredient. Given a column vector yT ≜ [yt]⊤_{t=1..T} of noisy observations of f at inputs x1, . . . , xT obtained after T evaluations, a GP permits efficient computation of its posterior for any input x. The GP posterior is a Gaussian with posterior mean and variance µT(x) ≜ kT(x)⊤(KT + σ²I)⁻¹yT and σ²T(x) ≜ k(x, x) − kT(x)⊤(KT + σ²I)⁻¹kT(x) (3) where KT ≜ [k(xt, xt′)]_{t,t′=1,...,T} is the kernel matrix and kT(x) ≜ [k(xt, x)]⊤_{t=1,...,T} is the vector of cross-covariances between x and x1, . . . , xT. Expected Improvement (EI). EI attempts to find a new candidate input xt at iteration t that maximizes the expected improvement over the best value seen thus far. Given the current GP posterior and xbest ≜ argmin_{x∈{x1,...,xt−1}} f(x), the next xt is found by maximizing aEI(x) ≜ σ_{t−1}(x)[γ_{t−1}(x)Φ(γ_{t−1}(x)) + N(γ_{t−1}(x); 0, 1)] (4) where Φ(x) is the cumulative distribution function of the standard Gaussian and γt(x) ≜ (f(xbest) − µt(x))/σt(x) is a Z-score. Specializing BO for IRL. To apply BO to IRL, we set the function f to be the IRL loss, i.e., f(θ) = LIRL(θ), and specify the kernel function k(θ, θ′) in the GP. The latter is a crucial choice; since the kernel encodes the prior covariance structure across the reward parameter space, its specification can have a dramatic impact on search performance. Unfortunately, as we will demonstrate, popular stationary kernels are generally unsuitable for IRL. The remainder of this section details this issue and how we can remedy it via a specially-designed projection.
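The GP posterior in (3) and the EI acquisition in (4) can be sketched in a few lines of numpy; this is a generic illustration under simplifying assumptions (zero prior mean, an RBF kernel, a small jitter term standing in for the noise variance, and the incumbent taken as the lowest observed value since f is being minimized), not the implementation used in BO-IRL.

import numpy as np
from scipy.stats import norm

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def gp_posterior(X_train, y_train, X_query, noise=1e-4, lengthscale=1.0):
    K = rbf(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    k = rbf(X_train, X_query, lengthscale)                 # cross-covariances
    K_inv = np.linalg.inv(K)
    mean = k.T @ K_inv @ y_train                            # Eq. (3), posterior mean
    var = 1.0 - np.sum(k * (K_inv @ k), axis=0)             # Eq. (3), posterior variance
    return mean, np.maximum(var, 1e-12)

def expected_improvement(X_train, y_train, X_query, **kw):
    mean, var = gp_posterior(X_train, y_train, X_query, **kw)
    sigma = np.sqrt(var)
    gamma = (y_train.min() - mean) / sigma                  # Z-score w.r.t. best (lowest) value
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))   # Eq. (4)

# toy run: pick the next BO query point for a 1-D black-box function
f = lambda x: np.sin(3 * x).ravel() + 0.1 * x.ravel() ** 2
X_train = np.array([[-2.0], [0.0], [1.0]])
y_train = f(X_train)
X_query = np.linspace(-3, 3, 601)[:, None]
x_next = X_query[np.argmax(expected_improvement(X_train, y_train, X_query, lengthscale=0.5))]
print("next BO query:", x_next)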
3.1 Limitations of Standard Stationary Kernels: An Illustrative Example As a first attempt to optimize LIRL using BO, one may opt to parameterize the GP surrogate function with standard stationary kernels, which are functions of θ − θ′. For example, the radial basis function (RBF) kernel is given by kRBF(θ, θ′) = exp(−‖θ − θ′‖²/(2l²)) (5) where the lengthscale l captures how far one can reliably extrapolate from a given data point. While simple and popular, the RBF is a poor choice for capturing covariance structure in the reward parameter space. To elaborate, the RBF kernel encodes the notion that reward parameters which are closer together (in terms of squared Euclidean distance) have similar LIRL values. However, this structure does not generally hold true in an IRL setting due to policy invariance; in our Gridworld example, LIRL(θa) is the same as LIRL(θb) despite θa and θb being far apart (see Fig. 1b). Indeed, Fig. 2b illustrates that applying BO with the RBF kernel yields a poor GP posterior approximation to the true NLLs. The same effect can be seen for the Matérn kernel in Fig. 2c. 3.2 Addressing Policy Invariance with the ρ-Projection The key insight of this work is that better exploration can be achieved via an alternative representation of reward functions that mitigates policy invariance associated with IRL [3]. Specifically, we develop the ρ-projection whose key properties are that (a) policy invariant reward functions are mapped to a single point and (b) points that are close in its range correspond to reward functions with similar LIRL. Effectively, the ρ-projection maps reward function parameters into a space where standard stationary kernels are able to capture the covariance between reward functions. For expositional simplicity, let us first consider the special case where we have only one expert demonstration. Definition 1 Consider an MDP M with reward Rθ and a single expert trajectory τ. Let F(τ) be a set of M uniformly sampled trajectories from M with the same starting state and length as τ. Define the ρ-projection ρτ : Θ → R as ρτ(θ) ≜ pθ(τ) / (pθ(τ) + ∑_{τ′∈F(τ)} pθ(τ′)) = (exp(Rθ(τ))/Z(θ)) / (exp(Rθ(τ))/Z(θ) + ∑_{τ′∈F(τ)} exp(Rθ(τ′))/Z(θ)) = exp(Rθ(τ)) / (exp(Rθ(τ)) + ∑_{τ′∈F(τ)} exp(Rθ(τ′))). (6) The first equality in (6) is a direct consequence of the assumption that the distribution of trajectories in MDP M follows (1) from the maximum entropy IRL framework. It can be seen from the second equality in (6) that an appealing property of the ρ-projection is that the partition function is canceled off from the numerator and denominator, thereby eliminating the need to approximate it. Note that the ρ-projection is not an approximation of pθ(τ) despite the similar forms. F(τ) in the denominator of the ρ-projection is sampled to have the same starting point and length as τ; as such, it may not cover the space of all trajectories and hence does not approximate Z(θ) even with large M. We will discuss below how the ρ-projection achieves the aforementioned properties. Policy invariance can occur due to multiple causes and we begin our discussion with a common class of policy invariant reward functions, namely, those resulting from potential-based reward shaping (PBRS) [28]. ρ-Projection of PBRS-Based Policy Invariant Reward Functions. Reward shaping is a method used to augment the reward function with additional information (referred to as a shaping function) without changing its optimal policy [24].
Designing a reward shaping function can be thought of as the inverse problem of identifying the underlying cause of policy invariance. Potential-based reward shaping (PBRS) [28] is a popular shaping function that provides theoretical guarantees for single-objective single-agent domains. We summarize the main theoretical result from [28] below: Theorem 1 Consider an MDPM0 : 〈S,A, T, γ,R0〉. We define PBRS F : S ×A× S → R to be a function of the form F (s, a, s′) , γφ(s′)− φ(s) where φ(s) is any function of the form φ : S → R. Then, for all s, s′ ∈ S and a ∈ A, the following transformation fromR0 toR is sufficient to guarantee that every optimal policy inM0 is also optimal in MDPM : 〈S,A, T, γ,R〉: R(s, a, s′) , R0(s, a, s ′) + F (s, a, s′) = R0(s, a, s ′) + γφ(s′)− φ(s) . (7) Remark 1 The work of [28] has proven Theorem 1 for the special case of deterministic policies. However, this theoretical result also holds for stochastic policies, as shown in Appendix A. Corollary 1 Given a reward function R(s, a, s′), any reward function R̂(s, a, s′) , R(s, a, s) + c is policy invariant to R(s, a, s′) where c is a constant. This is a special case of PBRS where φ(s) is a constant. The following theorem states that ρ-projection maps reward functions that are shaped using PBRS to a single point given sufficiently long trajectories: Theorem 2 Let Rθ and Rθ̂ be reward functions that are policy invariant under the definition in Theorem 1. Then, w.l.o.g., for a given expert trajectory τ with length L, limL→∞ ρτ (θ̂) = ρτ (θ) . (8) Its proof is in Appendix B. In brief, when summing up F (s, a, s′) (from Theorem 1) across the states and actions in a trajectory, most terms cancel out leaving only two terms: (a) φ(s0) which depends on the start state s0 and (b) γLφ(sL) which depends on the end state sL. With a sufficiently large L, the second term reaches zero. Our definition of ρτ (θ) assumes that s0 is the same for all trajectories. As a result, the influence of these two terms and by extension, the influence of the reward shaping function is removed by the ρ-projection. Corollary 2 ρτ (θ̂) = ρτ (θ) if (a) Rθ and Rθ̂ are only state dependent or (b) all τ ′ ∈ F(τ) have the same end state as τ in addition to the same starting state and same length. Its proof is in Appendix C. ρ-Projection of Other Classes of Policy Invariance. There may exist other classes of policy invariant reward functions for a given IRL problem. How does the ρ-projection handle these policy invariant reward functions? We argue that ρ-projection indeed maps all policy invariant reward functions (regardless of their function class) to a single point if (1) holds true. Definition 1 casts the ρ-projection as a function of the likelihood of given (fixed) trajectories. Hence, the ρ-projection is identical for reward functions that are policy invariant since the likelihood of a fixed set of trajectories is the same for such reward functions. The ρ-projection can also be interpreted as a ranking function between the expert demonstrations and uniformly sampled trajectories, as shown in [8]. A high ρ-projection implies a higher preference for expert trajectories over uniformly sampled trajectories with this relative preference decreasing with lower ρ-projection. This ensures that reward functions with similar likelihoods are mapped to nearby points. 3.3 ρ-RBF: Using the ρ-Projection in BO-IRL For simplicity, we have restricted the above discussion to a single expert trajectory τ . 
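The following toy experiment (a made-up chain MDP and potential function, not from the paper) computes the ρ-projection of Eq. (6) for a base reward and for its PBRS-shaped version; with a sufficiently long trajectory and a common start state, the two values agree up to the vanishing γ^L φ(sL) term, illustrating Theorem 2.

import numpy as np

gamma = 0.9

def traj_return(traj, reward):
    return sum((gamma ** t) * reward(s, a, s_next) for t, (s, a, s_next) in enumerate(traj))

def rho(expert_traj, sampled_trajs, reward):
    # Eq. (6): exp(R(tau)) / (exp(R(tau)) + sum_{tau'} exp(R(tau'))); Z(theta) cancels.
    rs = np.array([traj_return(expert_traj, reward)] +
                  [traj_return(t, reward) for t in sampled_trajs])
    w = np.exp(rs - rs.max())                 # numerically stabilised softmax-like ratio
    return w[0] / w.sum()

# toy chain MDP: states 0..9, actions +/-1, reward depends on the next state
base_reward = lambda s, a, s_next: 1.0 if s_next == 9 else -0.1
phi = lambda s: 0.5 * s                       # an arbitrary potential function
shaped_reward = lambda s, a, s_next: base_reward(s, a, s_next) + gamma * phi(s_next) - phi(s)

rng = np.random.default_rng(0)
def random_traj(length, start=0):
    s, out = start, []
    for _ in range(length):
        a = int(rng.choice([-1, 1]))
        s_next = int(np.clip(s + a, 0, 9))
        out.append((s, a, s_next))
        s = s_next
    return out

L = 100
expert, s = [], 0
for _ in range(L):                            # expert walks right and stays at state 9
    s_next = min(s + 1, 9)
    expert.append((s, 1, s_next))
    s = s_next
sampled = [random_traj(L) for _ in range(20)] # F(tau): same start state and length as tau
print("rho(base reward)  :", rho(expert, sampled, base_reward))
print("rho(shaped reward):", rho(expert, sampled, shaped_reward))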
In practice, we typically have access to K expert trajectories and can project θ to a K-dimensional vector [ρτk(θ)] K k=1. The similarity of two reward functions can now be assessed by the Euclidean distance between their projected points. In this work, we use a simple RBF kernel after the ρ-projection, which results in the ρ-RBF kernel; other kernels can also be used. Algorithm 2 in Appendix E describes in detail the computations required by the ρ-RBF kernel. With the ρ-RBF kernel, BO-IRL follows standard BO practices with EI as an acquisition function (see Algorithm 1 in Appendix E). BO-IRL can be applied to both discrete and continuous environments, as well as model-based and model-free settings. Fig. 3 illustrates the ρ-projection “in-action” using the Gridworld example. Recall the reward function in this environment is parameterized by θ = {θ0, θ1, θ2}. By varying θ2 (translation) while keeping {θ0, θ1} constant, we generate reward functions that are policy invariant, as per Corollary 1. The yellow stars are two such policy invariant reward functions (with fixed {θ0, θ1} and two different values of θ2) that share identical LIRL (i.e., indicated by color). Fig. 3c shows a PCA-reduced representation of the 20-dimensional ρ-space (i.e., the range of the ρ-projection). These two reward parameters are mapped to a single point. Furthermore, reward parameters that are similar in likelihood (red, blue, and yellow stars) are mapped close to one other. Using the ρ-RBF in BO yields a better posterior and samples, as illustrated in Fig. 2d. 3.4 Related Work Our approach builds upon the methods and tools developed to address IRL, in particular, maximum entropy IRL (ME-IRL) [38]. However, compared to ME-IRL and its deep learning variant: maximum entropy deep IRL (deep ME-IRL) [37], our BO-based approach can reduce the number of (expensive) exact policy evaluations via better exploration. Newer approaches such as guided cost learning (GCL) [12] and adversarial IRL (AIRL) [14] avoid exact policy optimization by approximating the policy using a neural network that is learned along with the reward function. However, the quality of the solution obtained depends on the heuristics used and similar to ME-IRL: These methods return a single solution. In contrast, BO-IRL returns the best-seen reward function (possibly a set) along with the GP posterior which models LIRL. A related approach is Bayesian IRL (BIRL) [32] which incorporates prior information and returns a posterior over reward functions. However, BIRL attempts to obtain the entire posterior and utilizes a random policy walk, which is inefficient. In contrast, BO-IRL focuses on regions with high likelihood. GP-IRL [20] utilizes a GP as the reward function, while we use a GP as a surrogate for LIRL. Compatible reward IRL (CR-IRL) [25] can also retrieve multiple reward functions that are consistent with the policy learned from the demonstrations using behavioral cloning. However, since demonstrations are rarely exhaustive, behavioral cloning can overfit, thus leading to an incorrect policy. Recent work has applied adversarial learning to derive policies, specifically, by generative adversarial imitation learning (GAIL) [16]. However, GAIL directly learns the expert’s policy (rather the a reward function) and is not directly comparable to BO-IRL. 4 Experiments and Discussion In this section, we report on experiments designed to answer two primary questions: Q1 Does BO-IRL with ρ-RBF uncover multiple reward functions consistent with the demonstrations? 
Q2 Is BO-IRL able to find good solutions compared to other IRL methods while reducing the number of policy optimizations required? Due to space constraints, we focus on the key results obtained. Additional results and plots are available in Appendix F. Setup and Evaluation. Our experiments were conducted using the four environments shown in Fig. 4: two model-based discrete environments, Gridworld and Börlange road network [13], and two model-free continuous environments, Point Mass Maze [14] and Fetch-Reach [31]. Evaluation for the Fetch-Reach task environment was performed by comparing the success rate of the optimal policy πθ̂ obtained from the learned reward θ̂. For the other environments, we have computed the expected sum of rewards (ESOR) which is the average ground truth reward that an agent receives while traversing a trajectory sampled using πθ̂. For BO-IRL, the best-seen reward function is used for the ESOR calculation. More details about the experimental setup is available in Appendix D. BO-IRL Recovers Multiple Regions of High Likelihood. To answer Q1, we examine the GP posteriors learned by BO-IRL (with ρ-RBF kernel) and compare them against Bayesian IRL (BIRL) with uniform prior [32]. BIRL learns a posterior distribution over reward functions, which can also be used to identify regions with high-probability reward functions. Figs. 5a and 5c show that BIRL assigns high probability to reward functions adjacent to the ground truth but ignores other equally probable regions. In contrast, BOIRL has identified multiple regions of high likelihood, as shown in Figs. 5b and 5d. Interestingly, BO-IRL has managed to identify multiple reward functions with lower NLL than the expert’s true reward (as shown by red crosses) in both environments. For instance, the linear “bands” of low NLL values at the bottom of Fig. 5d indicate that the travel patterns of the expert agent in the Börlange road network can be explained by any reward function that correctly trades off the time needed to traverse a road segment with the number of left turns encountered; left-turns incur additional time penalty due to traffic stops. Figs. 6a and 6b show the GP posterior learned by BO-IRL for the two continuous environments. The Fetch-Reach task environment has a discontinuous reward function of the distance threshold and penalty. As seen in Fig. 6a, the reward function space in the Fetch-Reach task environment has multiple disjoint regions of high likelihood, hence making it difficult for traditional IRL algorithms to converge to the true solution. Similarly, multiple regions of high likelihood are also observed in the Point Mass Maze setting (Fig. 6b). BO-IRL Performs Well with Fewer Iterations Relative to Existing Methods. In this section, we describe experimental results related to Q2, i.e., whether BO-IRL is able to find high-quality solutions within a given budget, as compared to other representative state-of-the-art approaches. We compare BO-IRL against BIRL, guided cost learning (GCL) [12] and adversarial IRL (AIRL) [14]. As explained in Appendix D.5, deep ME-IRL [37] has failed to give meaningful results across all the settings and is hence not reported. Note that GCL and AIRL do not use explicit policy evaluations and hence take less computation time. However, they only return a single reward function. As such, they are not directly comparable to BO-IRL, but serve to illustrate the quality of solutions obtained using recent approximate single-reward methods. 
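For reference, the ESOR metric used in these comparisons can be sketched as follows; the tiny chain environment, the stand-in policies, and the use of discounting are all assumptions made only to keep the example self-contained and runnable, not details taken from the paper's evaluation code.

import numpy as np

gamma = 0.9

def esor(step, reset, policy, true_reward, n_rollouts=200, horizon=30, seed=0):
    # Average ground-truth return of trajectories sampled with a (learned-reward) policy.
    rng = np.random.default_rng(seed)
    totals = []
    for _ in range(n_rollouts):
        s, total = reset(rng), 0.0
        for t in range(horizon):
            a = policy(s, rng)
            s_next = step(s, a, rng)
            total += (gamma ** t) * true_reward(s, a, s_next)
            s = s_next
        totals.append(total)
    return float(np.mean(totals))

# toy 1-D chain: states 0..9, noisy transitions, goal state 9
reset = lambda rng: 0
step = lambda s, a, rng: int(np.clip(s + a + rng.choice([-1, 0, 0, 1]), 0, 9))
true_reward = lambda s, a, s_next: 1.0 if s_next == 9 else 0.0
greedy_policy = lambda s, rng: 1                    # stands in for the policy of a learned reward
random_policy = lambda s, rng: int(rng.choice([-1, 1]))
print("ESOR(greedy) =", esor(step, reset, greedy_policy, true_reward))
print("ESOR(random) =", esor(step, reset, random_policy, true_reward))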
BO-IRL with RBF and Matérn kernels do not have the overhead of calculating the projection function and therefore has a faster computation time. However, as seen from Fig. 2, these kernels fail to correctly characterize the reward function space correctly. We ran BO-IRL with the RBF, Matérn, and ρ-RBF kernels. Table 1 summarizes the results for Gridworld environment, Börlange road network, and Point Mass Maze. Since no ground truth reward is available for the Börlange road network, we used the reward function in [13] and generated artificial trajectories.2 BO-IRL with ρ-RBF reached expert’s ESOR with fewer iterations than the other tested algorithms across all the settings. BIRL has a higher success rate in Gridworld environment compared to our method; however, it requires a significantly higher number of iterations with each iteration involving expensive exact policy optimization. It is also worth noting that AIRL and GCL are unable to exploit the transition dynamics of the Gridworld environment and Börlange road network settings. This in turn results in unnecessary querying of the environment for additional trajectories to approximate the policy function. BO-IRL is flexible to handle both model-free and model-based environments by an appropriate selection of the policy optimization method. 2BO-IRL was also tested on the real-world trajectories from the Börlange road network dataset; see Fig. 11 in Appendix F.4. Fig. 7c shows that policies obtained from rewards learned using ρ-RBF achieve higher success rates compared to other kernels in the Fetch-Reach task environment.3 Interestingly, the success rate falls in later iterations due to the discovery of reward functions that are consistent with the demonstrations but do not align with the actual goal of the task. For instance, the NLL for Fig. 7b is less than that for Fig. 7a. However, the intention behind this task is clearly better captured by the reward function in Fig. 7a: The distance threshold from the target (blue circle) is small, hence indicating that the robot gripper has to approach the target. In comparison, the reward function in Fig. 7b encodes a large distance threshold, which rewards every action inside the blue circle. These experiments show that “blindly” optimizing NLL can lead to poor policies. The different solutions that are discovered by BO-IRL can be further analyzed downstream to select an appropriate reward function or to tweak state representations. 5 Conclusion and Future Work This paper describes a Bayesian Optimization approach to reward function learning called BO-IRL. At the heart of BO-IRL is our ρ-projection (and the associated ρ-RBF kernel) that enables efficient exploration of the reward function space by explicitly accounting for policy invariance. Experimental results are promising: BO-IRL uncovers multiple reward functions that are consistent with the expert demonstrations while reducing the number of exact policy optimizations. Moving forward, BO-IRL opens up new research avenues for IRL. For example, we plan to extend BO-IRL to handle higher-dimensional reward function spaces, batch modes, federated learning and nonmyopic settings where recently developed techniques (e.g., [10, 11, 17, 18, 21, 33]) may be applied. 3AIRL and GCL were not tested on the Fetch-Reach task environment as the available code was incompatible with the environment. 
Broader Impact It is important that our autonomous agents operate with the correct objectives to ensure that they exhibit appropriate and trustworthy behavior (ethically, legally, etc.) [19]. This issue is gaining broader significance as autonomous agents are increasingly deployed in real-world settings, e.g., in the form of autonomous vehicles, intelligent assistants for medical diagnosis, and automated traders. However, specifying objectives is difficult, and as this paper motivates, reward function learning via demonstration likelihood optimization may also lead to inappropriate behavior. For example, our experiments with the Fetch-Reach environment show that apparently “good” solutions in terms of NLL correspond to poor policies. BO-IRL takes one step towards addressing this issue by providing an efficient algorithm for returning more information about potential reward functions in the form of discovered samples and the GP posterior. This approach can help users further iterate to arrive at an appropriate reward function, e.g., to avoid policies that cause unexpected or undesirable behavior. As with other learning methods, there is a risk of misuse. This work does not consider constraints that limit the reward functions that can be learned. As such, users may teach the robots to perform unethical or illegal actions; consider the recent incident where users taught Microsoft’s chatbot Tay to spout racist and anti-social tweets. With robots that are capable of physical actions, consequences may be more severe, e.g., bad actors may teach the robot to cause both psychological and physical harm. A more subtle problem is that harmful policies may result unintentionally from misuse of BO-IRL, e.g., when the assumptions of the method do not hold. These issues point to potential future work on verification or techniques to enforce constraints in BO-IRL and other IRL algorithms. Acknowledgments and Disclosure of Funding This research/project is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program, Singapore-MIT Alliance for Research and Technology (SMART) Future Urban Mobility (FM) IRG and the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2019-011). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
1. What is the focus and contribution of the paper on inverse reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its application of Bayesian optimization? 3. What are the weaknesses of the paper, especially in terms of comparisons with other works and computational efficiency? 4. How does the reviewer assess the novelty and significance of the proposed method in the field of inverse reinforcement learning? 5. Are there any questions or concerns regarding the experimental results and theoretical analysis presented in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions Thank you for your detailed rebuttal. The comparison to BIRL is appreciated. I'm still unsure if the difference in NLL is due to different likelihoods (Boltzmann vs. max ent), but I think this is a strong submission and a novel approach. --- This paper proposes a Bayesian optimization approach for learning a variety of reward functions that are likely given demonstrations. Bayesian optimization allows efficient exploration of the reward function space which is important since IRL methods are typically expensive to run. The authors demonstrate that standard Bayesian optimization does not work and that a specialized kernel that respects policy invariance is key to allowing Bayesian optimization to work for IRL. Experimental results show that the proposed method works on a variety of problems and is superior to other methods that only learn a point estimate of the reward function. Strengths Bayesian optimization is a well-established field in machine learning that has not been applied directly to reward learning. This paper makes the nice contribution of combining GP Bayesian optimization with maximum entropy IRL. The proposed kernel has interesting theoretical properties and good empirical performance compared to other standard kernels. Bayesian IRL methods are typically computationally intractable due to long mixing times for MCMC and requiring an MDP solver for each proposal evaluation. BO-IRL seems like a nice middle ground between full Bayesian inference and just estimating an MLE reward function as done in most prior work. The experimental results are promising and the visualizations of the reward function space are nice. Also the paper provides nice theoretical results regarding reward shaping and policy invariance. Weaknesses The performance of BO-IRL is shown compared to AIRL and GCL; however, performance is not benchmarked with respect to standard Bayesian IRL. This comparison (at least for the model-based envs) should be added to really understand how this method compares to previous works. Run time results are not included. Since the proposed method requires using an MDP solver in the inner loop this seems like it could significantly slow things down compared to a GAN-style approach like GCL or AIRL. But because BO-IRL uses Bayesian optimization it may be more computationally efficient. It would be beneficial to compare each method's run-time to make a better comparison of performance/computation trade-offs.
NIPS
Title Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization Abstract The problem of inverse reinforcement learning (IRL) is relevant to a variety of tasks including value alignment and robot learning from demonstration. Despite significant algorithmic contributions in recent years, IRL remains an ill-posed problem at its core; multiple reward functions coincide with the observed behavior and the actual reward function is not identifiable without prior knowledge or supplementary information. This paper presents an IRL framework called Bayesian optimization-IRL (BO-IRL) which identifies multiple solutions that are consistent with the expert demonstrations by efficiently exploring the reward function space. BO-IRL achieves this by utilizing Bayesian Optimization along with our newly proposed kernel that (a) projects the parameters of policy invariant reward functions to a single point in a latent space and (b) ensures nearby points in the latent space correspond to reward functions yielding similar likelihoods. This projection allows the use of standard stationary kernels in the latent space to capture the correlations present across the reward function space. Empirical results on synthetic and realworld environments (model-free and model-based) show that BO-IRL discovers multiple reward functions while minimizing the number of expensive exact policy optimizations. 1 Introduction Inverse reinforcement learning (IRL) is the problem of inferring the reward function of a reinforcement learning (RL) agent from its observed behavior [1]. Despite wide-spread application (e.g., [1, 4, 5, 27]), IRL remains a challenging problem. A key difficulty is that IRL is ill-posed; typically, there exist many solutions (reward functions) for which a given behavior is optimal [2, 3, 29] and it is not possible to infer the true reward function from among these alternatives without additional information, such as prior knowledge or more informative demonstrations [9, 15]. Given the ill-posed nature of IRL, we adopt the perspective that an IRL algorithm should characterize the space of solutions rather than output a single answer. Indeed, there is often no one correct solution. Although this approach differs from traditional gradient-based IRL methods [38] and modern deep incarnations that converge to specific solutions in the reward function space (e.g., [12, 14]), it is not entirely unconventional. Previous approaches, notably Bayesian IRL (BIRL) [32], share this view and return a posterior distribution over possible reward functions. However, BIRL and other similar methods [25] are computationally expensive (often due to exact policy optimization steps) or suffer from issues such as overfitting [8]. In this paper, we pursue a novel approach to IRL by using Bayesian optimization (BO) [26] to minimize the negative log-likelihood (NLL) of the expert demonstrations with respect to reward functions. BO is specifically designed for optimizing expensive functions by strategically picking inputs to evaluate and appears to be a natural fit for this task. In addition to the samples procured, the Gaussian process (GP) regression used in BO returns additional information about the discovered 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. reward functions in the form of a GP posterior. 
Uncertainty estimates of the NLL for each reward function enable downstream analysis, and existing methods such as active learning [23] and active teaching [9] can be used to further narrow down these solutions. Given the benefits above, it may appear surprising that BO has not yet been applied to IRL, considering its application to many different domains [35]. A possible reason may be that BO does not work “out-of-the-box” for IRL despite its apparent suitability. Indeed, our initial naïve application of BO to IRL failed to produce good results. Further investigation revealed that standard kernels were unsuitable for representing the covariance structure in the space of reward functions. In particular, they ignore policy invariance [3], where a reward function maintains its optimal policy under certain operations such as linear translation. Leveraging this insight, we contribute a novel ρ-projection that remedies this problem. Briefly, the ρ-projection maps policy invariant reward functions to a single point in a new representation space where nearby points share similar NLL; Fig. 1 illustrates this key idea on a Gridworld environment (this Gridworld environment will be our running example throughout this paper). With the ρ-projection in hand, standard stationary kernels (such as the popular RBF) can be applied in a straightforward manner. We provide theoretical support for this property, and experiments on a variety of environments (both discrete and continuous, with model-based and model-free settings) show that our BO-IRL algorithm (with ρ-projection) efficiently captures the correlation structure of the reward space and outperforms representative state-of-the-art methods.
2 Preliminaries and Background
Markov Decision Process (MDP). An MDP is defined by a tuple $\mathcal{M} \triangleq \langle S, A, \mathcal{P}, R, \gamma \rangle$ where $S$ is a finite set of states, $A$ is a finite set of actions, $\mathcal{P}(s'|s,a)$ is the conditional probability of next state $s'$ given current state $s$ and action $a$, $R : S \times A \times S \to \mathbb{R}$ denotes the reward function, and $\gamma \in (0,1)$ is the discount factor. An optimal policy $\pi^*$ is a policy that maximizes the expected sum of discounted rewards $\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \mid \pi, \mathcal{M}\right]$. The task of finding an optimal policy is referred to as policy optimization. If the MDP is fully known, then policy optimization can be performed via dynamic programming. In model-free settings, RL algorithms such as proximal policy optimization [34] can be used to obtain a policy.
Inverse Reinforcement Learning (IRL). Often, it is difficult to manually specify or engineer a reward function. Instead, it may be beneficial to learn it from experts. The problem of inferring the unknown reward function from a set of (near) optimal demonstrations is known as IRL. The learner is provided with an MDP without a reward function, $\mathcal{M} \setminus R$, and a set $\mathcal{T} \triangleq \{\tau_i\}_{i=1}^{N}$ of $N$ trajectories. Each trajectory $\tau \triangleq \{(s_t, a_t)\}_{t=0}^{L-1}$ is of length $L$. Similar to prior work, we assume that the reward function can be represented by a real vector $\theta \in \Theta \subseteq \mathbb{R}^d$ and is denoted by $R_\theta(s, a, s')$. Overloading our notation, we denote the discounted reward of a trajectory $\tau$ as $R_\theta(\tau) \triangleq \sum_{t=0}^{L-1} \gamma^t R_\theta(s_t, a_t, s_{t+1})$. In the maximum entropy framework [38], the probability $p_\theta(\tau)$ of a given trajectory is related to its discounted reward as follows:
$p_\theta(\tau) = \exp(R_\theta(\tau)) / Z(\theta)$   (1)
where $Z(\theta)$ is the partition function that is intractable in most practical scenarios.
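To make the trajectory-level quantities above concrete, the following minimal sketch (Python/NumPy) computes the discounted return $R_\theta(\tau)$ and the corresponding unnormalized maximum entropy log-probability. It assumes, purely for illustration, a linear reward $R_\theta(s,a,s') = \theta^\top \phi(s,a,s')$ with a user-supplied feature map `feature_fn`; the paper itself only requires that $R_\theta$ be parameterized by $\theta$, and the partition function $Z(\theta)$ is deliberately left implicit because it is intractable in general.

```python
import numpy as np

def trajectory_return(theta, traj, feature_fn, gamma=0.95):
    """Discounted return R_theta(tau) = sum_t gamma^t * R_theta(s_t, a_t, s_{t+1}).

    `traj` is a list of (s, a, s_next) transitions; `feature_fn(s, a, s_next)`
    returns a feature vector phi, so R_theta = theta . phi (an assumed linear
    parameterization used only for this illustration).
    """
    return sum((gamma ** t) * float(np.dot(theta, feature_fn(s, a, s_next)))
               for t, (s, a, s_next) in enumerate(traj))

def unnormalized_log_prob(theta, traj, feature_fn, gamma=0.95):
    """Log of the numerator of Eq. (1): log p_theta(tau) + log Z(theta) = R_theta(tau)."""
    return trajectory_return(theta, traj, feature_fn, gamma)
```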
The optimal parameter $\theta^*$ is given by $\operatorname{argmin}_\theta L_{\mathrm{IRL}}(\theta)$ where
$L_{\mathrm{IRL}}(\theta) \triangleq -\sum_{\tau \in \mathcal{T}} \sum_{t=0}^{L-2} \left[\log(\pi^*_\theta(s_t, a_t)) + \log(\mathcal{P}(s_{t+1} \mid s_t, a_t))\right]$   (2)
is the negative log-likelihood (NLL) and $\pi^*_\theta$ is the optimal policy computed using $R_\theta$.
3 Bayesian Optimization-Inverse Reinforcement Learning (BO-IRL)
Recall that IRL algorithms take as input an MDP $\mathcal{M} \setminus R$, a space $\Theta$ of reward function parameters, and a set $\mathcal{T}$ of $N$ expert demonstrations. We follow the maximum entropy framework where the optimal parameter $\theta^*$ is given by $\operatorname{argmin}_\theta L_{\mathrm{IRL}}(\theta)$ and $L_{\mathrm{IRL}}(\theta)$ takes the form shown in (2). Unfortunately, calculating $\pi^*_\theta$ in (2) is expensive, which renders exhaustive exploration of the reward function space infeasible. To mitigate this expense, we propose to leverage Bayesian optimization (BO) [26]. Bayesian optimization is a general sequential strategy for finding a global optimum of an expensive black-box function $f : \mathcal{X} \to \mathbb{R}$ defined on some bounded set $\mathcal{X} \subseteq \mathbb{R}^d$. In each iteration $t = 1, \dots, T$, an input query $\mathbf{x}_t \in \mathcal{X}$ is selected to evaluate the value of $f$, yielding a noisy output $y_t \triangleq f(\mathbf{x}_t) + \epsilon$ where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is i.i.d. Gaussian noise with variance $\sigma^2$. Since evaluation of $f$ is expensive, a surrogate model is used to strategically select input queries to approach the global minimizer $\mathbf{x}^* = \operatorname{argmin}_{\mathbf{x} \in \mathcal{X}} f(\mathbf{x})$. The candidate $\mathbf{x}_t$ is typically found by maximizing an acquisition function. In this work, we use a Gaussian process (GP) [36] as the surrogate model and expected improvement (EI) [26] as our acquisition function.
Gaussian process (GP). A GP is a collection of random variables $\{f(\mathbf{x})\}_{\mathbf{x} \in \mathcal{X}}$ where every finite subset follows a multivariate Gaussian distribution. A GP is fully specified by its prior mean $\mu(\mathbf{x})$ and covariance $k(\mathbf{x}, \mathbf{x}')$ for all $\mathbf{x}, \mathbf{x}' \in \mathcal{X}$. In typical settings, $\mu(\mathbf{x})$ is often set to zero and the kernel function $k(\mathbf{x}, \mathbf{x}')$ is the primary ingredient. Given a column vector $\mathbf{y}_T \triangleq [y_t]_{t=1}^{T}$ of noisy observations of $f$ at inputs $\mathbf{x}_1, \dots, \mathbf{x}_T$ obtained after $T$ evaluations, a GP permits efficient computation of its posterior for any input $\mathbf{x}$. The GP posterior is a Gaussian with posterior mean and variance
$\mu_T(\mathbf{x}) \triangleq \mathbf{k}_T(\mathbf{x})^\top (\mathbf{K}_T + \sigma^2 I)^{-1} \mathbf{y}_T, \qquad \sigma^2_T(\mathbf{x}) \triangleq k(\mathbf{x}, \mathbf{x}) - \mathbf{k}_T(\mathbf{x})^\top (\mathbf{K}_T + \sigma^2 I)^{-1} \mathbf{k}_T(\mathbf{x})$   (3)
where $\mathbf{K}_T \triangleq [k(\mathbf{x}_t, \mathbf{x}_{t'})]_{t,t'=1,\dots,T}$ is the kernel matrix and $\mathbf{k}_T(\mathbf{x}) \triangleq [k(\mathbf{x}_t, \mathbf{x})]_{t=1,\dots,T}^\top$ is the vector of cross-covariances between $\mathbf{x}$ and $\mathbf{x}_t$.
Expected Improvement (EI). EI attempts to find a new candidate input $\mathbf{x}_t$ at iteration $t$ that maximizes the expected improvement over the best value seen thus far. Given the current GP posterior and $\mathbf{x}_{\mathrm{best}} \triangleq \operatorname{argmax}_{\mathbf{x} \in \{\mathbf{x}_1, \dots, \mathbf{x}_{t-1}\}} f(\mathbf{x})$, the next $\mathbf{x}_t$ is found by maximizing
$a_{\mathrm{EI}}(\mathbf{x}) \triangleq \sigma_{t-1}(\mathbf{x})\left[\gamma_{t-1}(\mathbf{x})\Phi(\gamma_{t-1}(\mathbf{x})) + \mathcal{N}(\gamma_{t-1}(\mathbf{x}); 0, 1)\right]$   (4)
where $\Phi(\cdot)$ is the cumulative distribution function of the standard Gaussian and $\gamma_t(\mathbf{x}) \triangleq (f(\mathbf{x}_{\mathrm{best}}) - \mu_t(\mathbf{x}))/\sigma_t(\mathbf{x})$ is a Z-score.
Specializing BO for IRL. To apply BO to IRL, we set the function $f$ to be the IRL loss, i.e., $f(\theta) = L_{\mathrm{IRL}}(\theta)$, and specify the kernel function $k(\theta, \theta')$ in the GP. The latter is a crucial choice; since the kernel encodes the prior covariance structure across the reward parameter space, its specification can have a dramatic impact on search performance. Unfortunately, as we will demonstrate, popular stationary kernels are generally unsuitable for IRL. The remainder of this section details this issue and how we can remedy it via a specially-designed projection.
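As a concrete illustration of Eqs. (3) and (4), the sketch below computes the GP posterior at a query point and the EI score with plain NumPy/SciPy. It is not the authors' implementation: a practical system would use Cholesky solves and a GP library, and the kernel function and noise variance here are placeholders.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(k, X_train, y_train, x_query, noise_var=1e-4):
    """Posterior mean/variance at x_query (Eq. 3) for a kernel function k(x, x')."""
    K = np.array([[k(xi, xj) for xj in X_train] for xi in X_train])
    k_vec = np.array([k(xi, x_query) for xi in X_train])
    A = np.linalg.inv(K + noise_var * np.eye(len(X_train)))  # use Cholesky solves in practice
    mu = k_vec @ A @ np.asarray(y_train)
    var = k(x_query, x_query) - k_vec @ A @ k_vec
    return mu, max(var, 1e-12)

def expected_improvement(mu, var, f_best):
    """EI score (Eq. 4); f_best is the incumbent objective value."""
    sigma = np.sqrt(var)
    z = (f_best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))
```

In BO-IRL, `f_best` would be the best NLL observed so far, and the next reward parameter to evaluate is chosen by maximizing `expected_improvement` over candidate parameters.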
3.1 Limitations of Standard Stationary Kernels: An Illustrative Example
As a first attempt to optimize $L_{\mathrm{IRL}}$ using BO, one may opt to parameterize the GP surrogate function with standard stationary kernels, which are functions of $\theta - \theta'$. For example, the radial basis function (RBF) kernel is given by
$k_{\mathrm{RBF}}(\theta, \theta') = \exp(-\|\theta - \theta'\|^2 / 2l^2)$   (5)
where the lengthscale $l$ captures how far one can reliably extrapolate from a given data point. While simple and popular, the RBF is a poor choice for capturing covariance structure in the reward parameter space. To elaborate, the RBF kernel encodes the notion that reward parameters which are closer together (in terms of squared Euclidean distance) have similar $L_{\mathrm{IRL}}$ values. However, this structure does not generally hold true in an IRL setting due to policy invariance; in our Gridworld example, $L_{\mathrm{IRL}}(\theta_a)$ is the same as $L_{\mathrm{IRL}}(\theta_b)$ despite $\theta_a$ and $\theta_b$ being far apart (see Fig. 1b). Indeed, Fig. 2b illustrates that applying BO with the RBF kernel yields a poor GP posterior approximation to the true NLLs. The same effect can be seen for the Matérn kernel in Fig. 2c.
3.2 Addressing Policy Invariance with the ρ-Projection
The key insight of this work is that better exploration can be achieved via an alternative representation of reward functions that mitigates the policy invariance associated with IRL [3]. Specifically, we develop the ρ-projection whose key properties are that (a) policy invariant reward functions are mapped to a single point and (b) points that are close in its range correspond to reward functions with similar $L_{\mathrm{IRL}}$. Effectively, the ρ-projection maps reward function parameters into a space where standard stationary kernels are able to capture the covariance between reward functions. For expositional simplicity, let us first consider the special case where we have only one expert demonstration.
Definition 1 Consider an MDP $\mathcal{M}$ with reward $R_\theta$ and a single expert trajectory $\tau$. Let $\mathcal{F}(\tau)$ be a set of $M$ uniformly sampled trajectories from $\mathcal{M}$ with the same starting state and length as $\tau$. Define the ρ-projection $\rho_\tau : \Theta \to \mathbb{R}$ as
$\rho_\tau(\theta) \triangleq \dfrac{p_\theta(\tau)}{p_\theta(\tau) + \sum_{\tau' \in \mathcal{F}(\tau)} p_\theta(\tau')} = \dfrac{\exp(R_\theta(\tau))/Z(\theta)}{\exp(R_\theta(\tau))/Z(\theta) + \sum_{\tau' \in \mathcal{F}(\tau)} \exp(R_\theta(\tau'))/Z(\theta)} = \dfrac{\exp(R_\theta(\tau))}{\exp(R_\theta(\tau)) + \sum_{\tau' \in \mathcal{F}(\tau)} \exp(R_\theta(\tau'))}$ .   (6)
The first equality in (6) is a direct consequence of the assumption that the distribution of trajectories in MDP $\mathcal{M}$ follows (1) from the maximum entropy IRL framework. It can be seen from the second equality in (6) that an appealing property of the ρ-projection is that the partition function cancels out of the numerator and denominator, thereby eliminating the need to approximate it. Note that the ρ-projection is not an approximation of $p(\tau)$ despite the similar forms. $\mathcal{F}(\tau)$ in the denominator of the ρ-projection is sampled to have the same starting point and length as $\tau$; as such, it may not cover the space of all trajectories and hence does not approximate $Z(\theta)$ even with large $M$. We will discuss below how the ρ-projection achieves the aforementioned properties.
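The following hypothetical sketch shows one way Eq. (6) could be computed. It only needs trajectory returns (the partition function has cancelled) and stabilizes the softmax-like ratio numerically; the helper `traj_return(theta, traj)` is assumed to compute $R_\theta(\tau)$ (e.g., the `trajectory_return` sketch above).

```python
import numpy as np

def rho_projection(theta, expert_traj, sampled_trajs, traj_return):
    """rho_tau(theta) from Eq. (6): the expert trajectory's share of
    (unnormalized) probability mass relative to M uniformly sampled
    trajectories F(tau) with the same start state and length as tau.
    """
    returns = np.array([traj_return(theta, expert_traj)] +
                       [traj_return(theta, t) for t in sampled_trajs])
    weights = np.exp(returns - returns.max())  # numerically stable softmax trick
    return float(weights[0] / weights.sum())
```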
Policy invariance can occur due to multiple causes, and we begin our discussion with a common class of policy invariant reward functions, namely, those resulting from potential-based reward shaping (PBRS) [28].
ρ-Projection of PBRS-Based Policy Invariant Reward Functions. Reward shaping is a method used to augment the reward function with additional information (referred to as a shaping function) without changing its optimal policy [24]. Designing a reward shaping function can be thought of as the inverse problem of identifying the underlying cause of policy invariance. Potential-based reward shaping (PBRS) [28] is a popular shaping function that provides theoretical guarantees for single-objective single-agent domains. We summarize the main theoretical result from [28] below:
Theorem 1 Consider an MDP $\mathcal{M}_0 \triangleq \langle S, A, T, \gamma, R_0 \rangle$. We define PBRS $F : S \times A \times S \to \mathbb{R}$ to be a function of the form $F(s, a, s') \triangleq \gamma\phi(s') - \phi(s)$ where $\phi$ is any function of the form $\phi : S \to \mathbb{R}$. Then, for all $s, s' \in S$ and $a \in A$, the following transformation from $R_0$ to $R$ is sufficient to guarantee that every optimal policy in $\mathcal{M}_0$ is also optimal in the MDP $\mathcal{M} \triangleq \langle S, A, T, \gamma, R \rangle$:
$R(s, a, s') \triangleq R_0(s, a, s') + F(s, a, s') = R_0(s, a, s') + \gamma\phi(s') - \phi(s)$ .   (7)
Remark 1 The work of [28] has proven Theorem 1 for the special case of deterministic policies. However, this theoretical result also holds for stochastic policies, as shown in Appendix A.
Corollary 1 Given a reward function $R(s, a, s')$, any reward function $\hat{R}(s, a, s') \triangleq R(s, a, s') + c$ is policy invariant to $R(s, a, s')$ where $c$ is a constant. This is a special case of PBRS where $\phi(s)$ is a constant.
The following theorem states that the ρ-projection maps reward functions that are shaped using PBRS to a single point given sufficiently long trajectories:
Theorem 2 Let $R_\theta$ and $R_{\hat\theta}$ be reward functions that are policy invariant under the definition in Theorem 1. Then, w.l.o.g., for a given expert trajectory $\tau$ with length $L$,
$\lim_{L \to \infty} \rho_\tau(\hat\theta) = \rho_\tau(\theta)$ .   (8)
Its proof is in Appendix B. In brief, when summing up $F(s, a, s')$ (from Theorem 1) across the states and actions in a trajectory, most terms cancel out, leaving only two terms: (a) $\phi(s_0)$, which depends on the start state $s_0$, and (b) $\gamma^L\phi(s_L)$, which depends on the end state $s_L$. With a sufficiently large $L$, the second term approaches zero. Our definition of $\rho_\tau(\theta)$ assumes that $s_0$ is the same for all trajectories. As a result, the influence of these two terms and, by extension, the influence of the reward shaping function is removed by the ρ-projection (a small numerical sketch of this telescoping argument is given at the end of this subsection).
Corollary 2 $\rho_\tau(\hat\theta) = \rho_\tau(\theta)$ if (a) $R_\theta$ and $R_{\hat\theta}$ are only state dependent or (b) all $\tau' \in \mathcal{F}(\tau)$ have the same end state as $\tau$ in addition to the same starting state and same length. Its proof is in Appendix C.
ρ-Projection of Other Classes of Policy Invariance. There may exist other classes of policy invariant reward functions for a given IRL problem. How does the ρ-projection handle these policy invariant reward functions? We argue that the ρ-projection indeed maps all policy invariant reward functions (regardless of their function class) to a single point if (1) holds true. Definition 1 casts the ρ-projection as a function of the likelihood of given (fixed) trajectories. Hence, the ρ-projection is identical for reward functions that are policy invariant since the likelihood of a fixed set of trajectories is the same for such reward functions. The ρ-projection can also be interpreted as a ranking function between the expert demonstrations and uniformly sampled trajectories, as shown in [8]. A high ρ-projection implies a higher preference for expert trajectories over uniformly sampled trajectories, with this relative preference decreasing with lower ρ-projection. This ensures that reward functions with similar likelihoods are mapped to nearby points.
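To make the telescoping argument behind Theorem 2 tangible, here is a small illustrative sketch (not from the paper) that applies potential-based shaping to a reward function and computes the net shaping contribution along a trajectory; the discounted sum of shaping terms collapses to $\gamma^L\phi(s_L) - \phi(s_0)$.

```python
def shape_reward(reward_fn, potential_fn, gamma):
    """PBRS (Eq. 7): R(s, a, s') = R0(s, a, s') + gamma * phi(s') - phi(s)."""
    def shaped(s, a, s_next):
        return reward_fn(s, a, s_next) + gamma * potential_fn(s_next) - potential_fn(s)
    return shaped

def shaping_offset(states, potential_fn, gamma):
    """Discounted sum of the shaping terms along a trajectory with states
    s_0, ..., s_L; the sum telescopes to gamma**L * phi(s_L) - phi(s_0),
    and the gamma**L term vanishes as L grows -- the crux of Theorem 2."""
    L = len(states) - 1
    return (gamma ** L) * potential_fn(states[-1]) - potential_fn(states[0])
```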
3.3 ρ-RBF: Using the ρ-Projection in BO-IRL
For simplicity, we have restricted the above discussion to a single expert trajectory $\tau$. In practice, we typically have access to $K$ expert trajectories and can project $\theta$ to a $K$-dimensional vector $[\rho_{\tau_k}(\theta)]_{k=1}^{K}$. The similarity of two reward functions can now be assessed by the Euclidean distance between their projected points. In this work, we use a simple RBF kernel after the ρ-projection, which results in the ρ-RBF kernel; other kernels can also be used. Algorithm 2 in Appendix E describes in detail the computations required by the ρ-RBF kernel (a sketch of the kernel computation is also given at the end of this section). With the ρ-RBF kernel, BO-IRL follows standard BO practices with EI as an acquisition function (see Algorithm 1 in Appendix E). BO-IRL can be applied to both discrete and continuous environments, as well as model-based and model-free settings. Fig. 3 illustrates the ρ-projection “in action” using the Gridworld example. Recall that the reward function in this environment is parameterized by θ = {θ0, θ1, θ2}. By varying θ2 (translation) while keeping {θ0, θ1} constant, we generate reward functions that are policy invariant, as per Corollary 1. The yellow stars are two such policy invariant reward functions (with fixed {θ0, θ1} and two different values of θ2) that share identical $L_{\mathrm{IRL}}$ (indicated by color). Fig. 3c shows a PCA-reduced representation of the 20-dimensional ρ-space (i.e., the range of the ρ-projection). These two reward parameters are mapped to a single point. Furthermore, reward parameters that are similar in likelihood (red, blue, and yellow stars) are mapped close to one another. Using the ρ-RBF in BO yields a better posterior and samples, as illustrated in Fig. 2d.
3.4 Related Work
Our approach builds upon the methods and tools developed to address IRL, in particular, maximum entropy IRL (ME-IRL) [38]. However, compared to ME-IRL and its deep learning variant, maximum entropy deep IRL (deep ME-IRL) [37], our BO-based approach can reduce the number of (expensive) exact policy evaluations via better exploration. Newer approaches such as guided cost learning (GCL) [12] and adversarial IRL (AIRL) [14] avoid exact policy optimization by approximating the policy using a neural network that is learned along with the reward function. However, the quality of the solution obtained depends on the heuristics used and, similar to ME-IRL, these methods return a single solution. In contrast, BO-IRL returns the best-seen reward function (possibly a set) along with the GP posterior which models $L_{\mathrm{IRL}}$. A related approach is Bayesian IRL (BIRL) [32], which incorporates prior information and returns a posterior over reward functions. However, BIRL attempts to obtain the entire posterior and utilizes a random policy walk, which is inefficient. In contrast, BO-IRL focuses on regions with high likelihood. GP-IRL [20] utilizes a GP as the reward function, while we use a GP as a surrogate for $L_{\mathrm{IRL}}$. Compatible reward IRL (CR-IRL) [25] can also retrieve multiple reward functions that are consistent with the policy learned from the demonstrations using behavioral cloning. However, since demonstrations are rarely exhaustive, behavioral cloning can overfit, thus leading to an incorrect policy. Recent work has applied adversarial learning to derive policies, specifically, generative adversarial imitation learning (GAIL) [16]. However, GAIL directly learns the expert’s policy (rather than a reward function) and is not directly comparable to BO-IRL.
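Before turning to the experiments, the following hypothetical sketch shows how the ρ-RBF kernel of Section 3.3 could be computed: project each reward parameter to the K-dimensional vector of ρ-values and apply a standard RBF on top. It reuses the `rho_projection` helper sketched earlier; the lengthscale and trajectory containers are placeholders, not the authors' implementation (see Algorithm 2 in Appendix E for the actual procedure).

```python
import numpy as np

def rho_rbf_kernel(theta_a, theta_b, expert_trajs, sampled_trajs_per_expert,
                   traj_return, lengthscale=1.0):
    """rho-RBF kernel: an RBF kernel applied to the K-dimensional projections
    [rho_{tau_k}(theta)]_{k=1..K} of the two reward parameters."""
    def project(theta):
        return np.array([rho_projection(theta, tau, F_tau, traj_return)
                         for tau, F_tau in zip(expert_trajs, sampled_trajs_per_expert)])
    d = project(theta_a) - project(theta_b)
    return float(np.exp(-d @ d / (2.0 * lengthscale ** 2)))
```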
4 Experiments and Discussion
In this section, we report on experiments designed to answer two primary questions:
Q1 Does BO-IRL with ρ-RBF uncover multiple reward functions consistent with the demonstrations?
Q2 Is BO-IRL able to find good solutions compared to other IRL methods while reducing the number of policy optimizations required?
Due to space constraints, we focus on the key results obtained. Additional results and plots are available in Appendix F.
Setup and Evaluation. Our experiments were conducted using the four environments shown in Fig. 4: two model-based discrete environments, Gridworld and the Börlange road network [13], and two model-free continuous environments, Point Mass Maze [14] and Fetch-Reach [31]. Evaluation for the Fetch-Reach task environment was performed by comparing the success rate of the optimal policy $\pi_{\hat\theta}$ obtained from the learned reward $\hat\theta$. For the other environments, we computed the expected sum of rewards (ESOR), which is the average ground-truth reward that an agent receives while traversing a trajectory sampled using $\pi_{\hat\theta}$ (a Monte Carlo sketch of this evaluation is given below). For BO-IRL, the best-seen reward function is used for the ESOR calculation. More details about the experimental setup are available in Appendix D.
BO-IRL Recovers Multiple Regions of High Likelihood. To answer Q1, we examine the GP posteriors learned by BO-IRL (with the ρ-RBF kernel) and compare them against Bayesian IRL (BIRL) with a uniform prior [32]. BIRL learns a posterior distribution over reward functions, which can also be used to identify regions with high-probability reward functions. Figs. 5a and 5c show that BIRL assigns high probability to reward functions adjacent to the ground truth but ignores other equally probable regions. In contrast, BO-IRL has identified multiple regions of high likelihood, as shown in Figs. 5b and 5d. Interestingly, BO-IRL has managed to identify multiple reward functions with lower NLL than the expert’s true reward (as shown by red crosses) in both environments. For instance, the linear “bands” of low NLL values at the bottom of Fig. 5d indicate that the travel patterns of the expert agent in the Börlange road network can be explained by any reward function that correctly trades off the time needed to traverse a road segment with the number of left turns encountered; left turns incur an additional time penalty due to traffic stops. Figs. 6a and 6b show the GP posterior learned by BO-IRL for the two continuous environments. The Fetch-Reach task environment has a reward function that is discontinuous in the distance threshold and penalty. As seen in Fig. 6a, the reward function space in the Fetch-Reach task environment has multiple disjoint regions of high likelihood, making it difficult for traditional IRL algorithms to converge to the true solution. Similarly, multiple regions of high likelihood are also observed in the Point Mass Maze setting (Fig. 6b).
BO-IRL Performs Well with Fewer Iterations Relative to Existing Methods. In this section, we describe experimental results related to Q2, i.e., whether BO-IRL is able to find high-quality solutions within a given budget, as compared to other representative state-of-the-art approaches. We compare BO-IRL against BIRL, guided cost learning (GCL) [12] and adversarial IRL (AIRL) [14]. As explained in Appendix D.5, deep ME-IRL [37] failed to give meaningful results across all the settings and is hence not reported. Note that GCL and AIRL do not use explicit policy evaluations and hence take less computation time. However, they only return a single reward function. As such, they are not directly comparable to BO-IRL, but serve to illustrate the quality of solutions obtained using recent approximate single-reward methods.
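For reference, the ESOR metric described above can be estimated with a simple Monte Carlo rollout. The sketch below is purely illustrative (not the authors' evaluation code) and assumes a Gym-style `reset()`/`step()` environment interface and a ground-truth reward function evaluated on transitions.

```python
import numpy as np

def estimate_esor(env, policy, true_reward_fn, n_rollouts=100, gamma=1.0, horizon=50):
    """Average ground-truth return of trajectories sampled from `policy`,
    i.e., the expected sum of rewards (ESOR) under the true reward."""
    totals = []
    for _ in range(n_rollouts):
        s, total = env.reset(), 0.0
        for t in range(horizon):
            a = policy(s)
            s_next, _, done, _ = env.step(a)
            total += (gamma ** t) * true_reward_fn(s, a, s_next)
            s = s_next
            if done:
                break
        totals.append(total)
    return float(np.mean(totals))
```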
BO-IRL with the RBF and Matérn kernels does not have the overhead of calculating the projection function and therefore has a faster computation time. However, as seen from Fig. 2, these kernels fail to correctly characterize the reward function space. We ran BO-IRL with the RBF, Matérn, and ρ-RBF kernels. Table 1 summarizes the results for the Gridworld environment, the Börlange road network, and Point Mass Maze. Since no ground truth reward is available for the Börlange road network, we used the reward function in [13] and generated artificial trajectories (BO-IRL was also tested on the real-world trajectories from the Börlange road network dataset; see Fig. 11 in Appendix F.4). BO-IRL with ρ-RBF reached the expert’s ESOR with fewer iterations than the other tested algorithms across all the settings. BIRL has a higher success rate in the Gridworld environment compared to our method; however, it requires a significantly higher number of iterations, with each iteration involving expensive exact policy optimization. It is also worth noting that AIRL and GCL are unable to exploit the transition dynamics of the Gridworld environment and the Börlange road network settings. This in turn results in unnecessary querying of the environment for additional trajectories to approximate the policy function. BO-IRL is flexible enough to handle both model-free and model-based environments by an appropriate selection of the policy optimization method.
Fig. 7c shows that policies obtained from rewards learned using ρ-RBF achieve higher success rates compared to other kernels in the Fetch-Reach task environment (AIRL and GCL were not tested on the Fetch-Reach task environment as the available code was incompatible with the environment). Interestingly, the success rate falls in later iterations due to the discovery of reward functions that are consistent with the demonstrations but do not align with the actual goal of the task. For instance, the NLL for Fig. 7b is less than that for Fig. 7a. However, the intention behind this task is clearly better captured by the reward function in Fig. 7a: the distance threshold from the target (blue circle) is small, indicating that the robot gripper has to approach the target. In comparison, the reward function in Fig. 7b encodes a large distance threshold, which rewards every action inside the blue circle. These experiments show that “blindly” optimizing NLL can lead to poor policies. The different solutions that are discovered by BO-IRL can be further analyzed downstream to select an appropriate reward function or to tweak state representations.
5 Conclusion and Future Work
This paper describes a Bayesian optimization approach to reward function learning called BO-IRL. At the heart of BO-IRL is our ρ-projection (and the associated ρ-RBF kernel) that enables efficient exploration of the reward function space by explicitly accounting for policy invariance. Experimental results are promising: BO-IRL uncovers multiple reward functions that are consistent with the expert demonstrations while reducing the number of exact policy optimizations. Moving forward, BO-IRL opens up new research avenues for IRL. For example, we plan to extend BO-IRL to handle higher-dimensional reward function spaces, batch modes, federated learning and nonmyopic settings where recently developed techniques (e.g., [10, 11, 17, 18, 21, 33]) may be applied.
Broader Impact
It is important that our autonomous agents operate with the correct objectives to ensure that they exhibit appropriate and trustworthy behavior (ethically, legally, etc.) [19]. This issue is gaining broader significance as autonomous agents are increasingly deployed in real-world settings, e.g., in the form of autonomous vehicles, intelligent assistants for medical diagnosis, and automated traders. However, specifying objectives is difficult, and as this paper motivates, reward function learning via demonstration likelihood optimization may also lead to inappropriate behavior. For example, our experiments with the Fetch-Reach environment show that apparently “good” solutions in terms of NLL can correspond to poor policies. BO-IRL takes one step towards addressing this issue by providing an efficient algorithm for returning more information about potential reward functions in the form of discovered samples and the GP posterior. This approach can help users further iterate to arrive at an appropriate reward function, e.g., to avoid policies that cause unexpected or undesirable behavior. As with other learning methods, there is a risk of misuse. This work does not consider constraints that limit the reward functions that can be learned. As such, users may teach robots to perform unethical or illegal actions; consider the recent incident where users taught Microsoft's chatbot Tay to spout racist and anti-social tweets. With robots that are capable of physical actions, consequences may be more severe, e.g., bad actors may teach a robot to cause both psychological and physical harm. A more subtle problem is that harmful policies may result unintentionally from misuse of BO-IRL, e.g., when the assumptions of the method do not hold. These issues point to potential future work on verification or techniques to enforce constraints in BO-IRL and other IRL algorithms.
Acknowledgments and Disclosure of Funding
This research/project is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program, Singapore-MIT Alliance for Research and Technology (SMART) Future Urban Mobility (FM) IRG and the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2019-011). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
1. What is the focus and contribution of the paper regarding inverse reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its elegance and novelty?
3. Do you have any concerns or suggestions for improving the paper, such as discussing the impact of the proposed projection on other forms of policy invariance?
Summary and Contributions
The paper addresses the problem of inverse reinforcement learning. The main contribution of the paper is a projection operator that is invariant under potential-based shaping transformations. In other words, two rewards that are related through reward shaping using a potential function are projected onto the same point in R^n. In rough terms, given K expert trajectories, a reward is projected onto a K-dimensional vector that measures the (max-entropy) likelihood of the observed trajectories in comparison with that of uniformly sampled trajectories. The proposed projection is then used within a Bayesian optimization setup to recover the reward function that maximizes the likelihood of the observed expert trajectories. The paper uses a standard Bayesian optimization setup with a Gaussian process as the surrogate model and expected improvement as the acquisition function. The Gaussian process is defined over the space of projected rewards, thus leveraging the aforementioned projection operator.
Strengths
The paper addresses a very relevant topic to the NIPS community---namely, inverse reinforcement learning. It is well-written, and the contributions proposed are elegant and, to the extent of my knowledge, novel. I really liked this paper.
Weaknesses
Perhaps the one aspect that I would like to see better discussed is the impact of the proposed projection on forms of policy invariance other than those obtained by potential-based reward shaping (more on this in the detailed comments).
NIPS
1. What is the main contribution of the paper in the field of inverse reinforcement learning? 2. What are the strengths of the proposed algorithm, particularly in terms of exploring the reward function space? 3. What are the weaknesses or concerns regarding the paper, especially regarding the second property of the \rho-projection method? 4. How does the reviewer assess the reproducibility of the experimental results? 5. Are there any suggestions for improving the computational efficiency of the algorithm, especially when dealing with high-dimensional reward parameters? 6. How does the reviewer evaluate the effectiveness of the proposed method in reducing the number of exact policy evaluations compared to state-of-the-art methods?
Summary and Contributions Strengths Weaknesses
Summary and Contributions [UPDATE] I thank the authors for their response and for addressing my concerns. I'm still not convinced by the argument in lines 2-6 of the rebuttal. If s_0 is some fixed initial state, and the initial state distribution is p_0(s_0) = 1, then they would need to approximate the partition function Z(theta) in Eq. 6; so it will depend on the MDP (or problem). Thus, in general it doesn't avoid the computation of Z(theta)? ==================== This paper proposes a new inverse reinforcement learning algorithm, named BO-IRL, which utilizes Bayesian optimization. BO-IRL returns a posterior distribution over the reward functions that are consistent with the demonstrated behavior, and it efficiently explores the reward function space. In particular, they aim to optimize the negative log-likelihood objective via Bayesian optimization. First, they illustrate the problems with directly using standard stationary kernels for IRL. To address these issues, they propose a \rho-projection method that satisfies two “preferred” properties. Finally, they use the standard kernels on the projected space. They have compared their algorithm with baselines on synthetic and real-world environments. Strengths The paper proposes a novel algorithm for the IRL problem with the potential to reduce the number of exact policy evaluation steps. The paper is fairly well written, and illustrative experimental results are provided. Reproducibility: well-documented code is provided. Weaknesses I consider the following as the limitations of/concerns about this work: 1/ Lack of rigorous quantification of the second property of the \rho-projection (Definition 1, Eq. 6): points that are close in its range correspond to reward functions with similar L_IRL. When does Eq. 6 satisfy this requirement? I guess that when rho_tau(theta) approaches p_theta(tau), this might be possible. If that is the case, the number of uniform samples M has to be large. Then, the computation of the rho-projection becomes expensive (equivalent to computing the partition function). I think more discussion on the value of M is required, and it would be better to report the values used in the experiments as well. 2/ In section 3.3, if the number of expert trajectories K is substantially larger than the dimension of the reward parameter, does this K-dimensional projection operation increase the computational complexity? 3/ They have provided a limited set of experiments to support the claim that the proposed method outperforms the SOTA methods. In section 3.4, they have claimed that BO-IRL can reduce the number of exact policy evaluations compared to ME-IRL and Deep ME-IRL. This could have been empirically demonstrated in the model-based discrete environments.
NIPS
Title Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization Abstract The problem of inverse reinforcement learning (IRL) is relevant to a variety of tasks including value alignment and robot learning from demonstration. Despite significant algorithmic contributions in recent years, IRL remains an ill-posed problem at its core; multiple reward functions coincide with the observed behavior and the actual reward function is not identifiable without prior knowledge or supplementary information. This paper presents an IRL framework called Bayesian optimization-IRL (BO-IRL) which identifies multiple solutions that are consistent with the expert demonstrations by efficiently exploring the reward function space. BO-IRL achieves this by utilizing Bayesian Optimization along with our newly proposed kernel that (a) projects the parameters of policy invariant reward functions to a single point in a latent space and (b) ensures nearby points in the latent space correspond to reward functions yielding similar likelihoods. This projection allows the use of standard stationary kernels in the latent space to capture the correlations present across the reward function space. Empirical results on synthetic and realworld environments (model-free and model-based) show that BO-IRL discovers multiple reward functions while minimizing the number of expensive exact policy optimizations. 1 Introduction Inverse reinforcement learning (IRL) is the problem of inferring the reward function of a reinforcement learning (RL) agent from its observed behavior [1]. Despite wide-spread application (e.g., [1, 4, 5, 27]), IRL remains a challenging problem. A key difficulty is that IRL is ill-posed; typically, there exist many solutions (reward functions) for which a given behavior is optimal [2, 3, 29] and it is not possible to infer the true reward function from among these alternatives without additional information, such as prior knowledge or more informative demonstrations [9, 15]. Given the ill-posed nature of IRL, we adopt the perspective that an IRL algorithm should characterize the space of solutions rather than output a single answer. Indeed, there is often no one correct solution. Although this approach differs from traditional gradient-based IRL methods [38] and modern deep incarnations that converge to specific solutions in the reward function space (e.g., [12, 14]), it is not entirely unconventional. Previous approaches, notably Bayesian IRL (BIRL) [32], share this view and return a posterior distribution over possible reward functions. However, BIRL and other similar methods [25] are computationally expensive (often due to exact policy optimization steps) or suffer from issues such as overfitting [8]. In this paper, we pursue a novel approach to IRL by using Bayesian optimization (BO) [26] to minimize the negative log-likelihood (NLL) of the expert demonstrations with respect to reward functions. BO is specifically designed for optimizing expensive functions by strategically picking inputs to evaluate and appears to be a natural fit for this task. In addition to the samples procured, the Gaussian process (GP) regression used in BO returns additional information about the discovered 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. reward functions in the form of a GP posterior. 
Uncertainty estimates of the NLL for each reward function enable downstream analysis and existing methods such as active learning [23] and active teaching [9] can be used to further narrow down these solutions. Given the benefits above, it may appear surprising that BO has not yet been applied to IRL, considering its application to many different domains [35]. A possible reason may be that BO does not work “out-of-the-box” for IRL despite its apparent suitability. Indeed, our initial naïve application of BO to IRL failed to produce good results. Further investigation revealed that standard kernels were unsuitable for representing the covariance structure in the space of reward functions. In particular, they ignore policy invariance [3] where a reward function maintains its optimal policy under certain operations such as linear translation. Leveraging on this insight, we contribute a novel ρ-projection that remedies this problem. Briefly, the ρ-projection maps policy invariant reward functions to a single point in a new representation space where nearby points share similar NLL; Fig. 1 illustrates this key idea on a Gridworld environment.1 With the ρ-projection in hand, standard stationary kernels (such as the popular RBF) can be applied in a straightforward manner. We provide theoretical support for this property and experiments on a variety of environments (both discrete and continuous, with model-based and model-free settings) show that our BO-IRL algorithm (with ρ-projection) efficiently captures the correlation structure of the reward space and outperforms representative state-of-the-art methods. 2 Preliminaries and Background Markov Decision Process (MDP). An MDP is defined by a tupleM : 〈S,A,P,R, γ〉 where S is a finite set of states, A is a finite set of actions, P(s′|s, a) is the conditional probability of next state s′ given current state s and action a, R : S ×A× S → R denotes the reward function, and γ ∈ (0, 1) is the discount factor. An optimal policy π∗ is a policy that maximizes the expected sum of discounted rewards E [ ∑∞ t=0 γ tR(st, at, st+1)|π,M]. The task of finding an optimal policy is referred to as policy optimization. If the MDP is fully known, then policy optimization can be performed via dynamic programming. In model-free settings, RL algorithms such as proximal policy optimization [34] can be used to obtain a policy. Inverse Reinforcement Learning (IRL). Often, it is difficult to manually specify or engineer a reward function. Instead, it may be beneficial to learn it from experts. The problem of inferring the unknown reward function from a set of (near) optimal demonstrations is known as IRL. The learner is 1This Gridworld environment will be our running example throughout this paper. provided with an MDP without a reward function,M\R, and a set T , {τi}Ni=1 of N trajectories. Each trajectory τ , {(st, at)}L−1t=0 is of length L. Similar to prior work, we assume that the reward function can be represented by a real vector θ ∈ Θ ⊆ Rd and is denoted by Rθ(s, a, s′). Overloading our notation, we denote the discounted reward of a trajectory τ as Rθ(τ) , ∑L−1 t=0 γ tRθ(st, at, st+1). In the maximum entropy framework [38], the probability pθ(τ) of a given trajectory is related to its discounted reward as follows: pθ(τ) = exp(Rθ(τ))/Z(θ) (1) where Z(θ) is the partition function that is intractable in most practical scenarios. 
The optimal parameter θ* is given by argmin_θ L_IRL(θ), where L_IRL(θ) ≜ −∑_{τ∈T} ∑_{t=0}^{L−2} [log(π*_θ(s_t, a_t)) + log(P(s_{t+1}|s_t, a_t))] (2) is the negative log-likelihood (NLL) and π*_θ is the optimal policy computed using R_θ. 3 Bayesian Optimization-Inverse Reinforcement Learning (BO-IRL) Recall that IRL algorithms take as input an MDP M\R, a space Θ of reward function parameters, and a set T of N expert demonstrations. We follow the maximum entropy framework where the optimal parameter θ* is given by argmin_θ L_IRL(θ) and L_IRL(θ) takes the form shown in (2). Unfortunately, calculating π*_θ in (2) is expensive, which renders exhaustive exploration of the reward function space infeasible. To mitigate this expense, we propose to leverage Bayesian optimization (BO) [26]. Bayesian optimization is a general sequential strategy for finding a global optimum of an expensive black-box function f : X → R defined on some bounded set X ⊆ R^d. In each iteration t = 1, . . . , T, an input query x_t ∈ X is selected to evaluate the value of f, yielding a noisy output y_t ≜ f(x_t) + ε, where ε ∼ N(0, σ²) is i.i.d. Gaussian noise with variance σ². Since evaluation of f is expensive, a surrogate model is used to strategically select input queries to approach the global minimizer x* = argmin_{x∈X} f(x). The candidate x_t is typically found by maximizing an acquisition function. In this work, we use a Gaussian process (GP) [36] as the surrogate model and expected improvement (EI) [26] as our acquisition function. Gaussian process (GP). A GP is a collection of random variables {f(x)}_{x∈X} where every finite subset follows a multivariate Gaussian distribution. A GP is fully specified by its prior mean µ(x) and covariance k(x, x′) for all x, x′ ∈ X. In typical settings, µ(x) is often set to zero and the kernel function k(x, x′) is the primary ingredient. Given a column vector y_T ≜ [y_t]^⊤_{t=1..T} of noisy observations of f at inputs x_1, . . . , x_T obtained after T evaluations, a GP permits efficient computation of its posterior for any input x. The GP posterior is a Gaussian with posterior mean and variance µ_T(x) ≜ k_T(x)^⊤ (K_T + σ²I)^{−1} y_T and σ²_T(x) ≜ k(x, x) − k_T(x)^⊤ (K_T + σ²I)^{−1} k_T(x) (3), where K_T ≜ [k(x_t, x_{t′})]_{t,t′=1,...,T} is the kernel matrix and k_T(x) ≜ [k(x_t, x)]^⊤_{t=1,...,T} is the vector of cross-covariances between x and x_t. Expected Improvement (EI). EI attempts to find a new candidate input x_t at iteration t that maximizes the expected improvement over the best value seen thus far. Given the current GP posterior and x_best ≜ argmin_{x∈{x_1,...,x_{t−1}}} f(x), the next x_t is found by maximizing a_EI(x) ≜ σ_{t−1}(x)[γ_{t−1}(x) Φ(γ_{t−1}(x)) + N(γ_{t−1}(x); 0, 1)] (4), where Φ(x) is the cumulative distribution function of the standard Gaussian and γ_t(x) ≜ (f(x_best) − µ_t(x))/σ_t(x) is a Z-score. Specializing BO for IRL. To apply BO to IRL, we set the function f to be the IRL loss, i.e., f(θ) = L_IRL(θ), and specify the kernel function k(θ, θ′) in the GP. The latter is a crucial choice; since the kernel encodes the prior covariance structure across the reward parameter space, its specification can have a dramatic impact on search performance. Unfortunately, as we will demonstrate, popular stationary kernels are generally unsuitable for IRL. The remainder of this section details this issue and how we can remedy it via a specially-designed projection.
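To make the surrogate model and acquisition step above concrete, the following is a minimal NumPy/SciPy sketch of the GP posterior in (3) and the EI acquisition in (4) for minimizing a black-box loss such as L_IRL. It is an illustrative re-implementation rather than the authors' code; the RBF kernel choice and all names (rbf_kernel, gp_posterior, expected_improvement) are our own assumptions.

```python
# Minimal sketch of the GP posterior (Eq. 3) and EI acquisition (Eq. 4).
# Illustrative only; names and the RBF kernel choice are assumptions, not the authors' code.
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 l^2)) for all pairs of rows in A and B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * lengthscale**2))

def gp_posterior(X_train, y_train, X_query, noise_var=1e-4, lengthscale=1.0):
    """Posterior mean and variance of a zero-mean GP at the query points (Eq. 3)."""
    K = rbf_kernel(X_train, X_train, lengthscale) + noise_var * np.eye(len(X_train))
    K_inv = np.linalg.inv(K)
    k_q = rbf_kernel(X_train, X_query, lengthscale)          # cross-covariances k_T(x)
    mean = k_q.T @ K_inv @ y_train                            # posterior mean
    var = rbf_kernel(X_query, X_query, lengthscale).diagonal() \
          - np.sum(k_q * (K_inv @ k_q), axis=0)               # posterior variance
    return mean, np.maximum(var, 1e-12)

def expected_improvement(X_train, y_train, X_query, **gp_kwargs):
    """EI for minimization: expected improvement over the lowest observed loss (Eq. 4)."""
    mean, var = gp_posterior(X_train, y_train, X_query, **gp_kwargs)
    std = np.sqrt(var)
    gamma = (y_train.min() - mean) / std                      # Z-score of Eq. (4)
    return std * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

# Usage: pick the next reward parameter theta to evaluate with the expensive IRL loss.
rng = np.random.default_rng(0)
thetas_seen = rng.uniform(-1, 1, size=(5, 2))      # already-evaluated reward parameters
nll_seen = rng.uniform(0, 1, size=5)               # their (expensive) L_IRL values
candidates = rng.uniform(-1, 1, size=(200, 2))
theta_next = candidates[np.argmax(expected_improvement(thetas_seen, nll_seen, candidates))]
```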
3.1 Limitations of Standard Stationary Kernels: An Illustrative Example As a first attempt to optimize LIRL using BO, one may opt to parameterize the GP surrogate function with standard stationary kernels, which are functions of θ−θ′. For example, the radial basis function (RBF) kernel is given by kRBF(θ,θ ′) = exp(−‖θ − θ′‖2/2l2) (5) where the lengthscale l captures how far one can reliably extrapolate from a given data point. While simple and popular, the RBF is a poor choice for capturing covariance structure in the reward parameter space. To elaborate, the RBF kernel encodes the notion that reward parameters which are closer together (in terms of squared Euclidean distance) have similar LIRL values. However, this structure does not generally hold true in an IRL setting due to policy invariance; in our Gridworld example, LIRL(θa) is the same as LIRL(θb) despite θa and θb being far apart (see Fig. 1b). Indeed, Fig. 2b illustrates that applying BO with the RBF kernel yields a poor GP posterior approximation to the true NLLs. The same effect can be seen for the Matérn kernel in Fig. 2c. 3.2 Addressing Policy Invariance with the ρ-Projection The key insight of this work is that better exploration can be achieved via an alternative representation of reward functions that mitigates policy invariance associated with IRL [3]. Specifically, we develop the ρ-projection whose key properties are that (a) policy invariant reward functions are mapped to a single point and (b) points that are close in its range correspond to reward functions with similar LIRL. Effectively, the ρ-projection maps reward function parameters into a space where standard stationary kernels are able to capture the covariance between reward functions. For expositional simplicity, let us first consider the special case where we have only one expert demonstration. Definition 1 Consider an MDPM with reward Rθ and a single expert trajectory τ . Let F(τ) be a set of M uniformly sampled trajectories fromM with the same starting state and length as τ . Define the ρ-projection ρτ : Θ→ R as ρτ (θ) , pθ(τ) pθ(τ) + ∑ τ ′∈F(τ) pθ(τ ′) = exp(Rθ(τ)/Z(θ)) exp(Rθ(τ)/Z(θ)) + ∑ τ ′∈F(τ) exp(Rθ(τ ′)/Z(θ)) = exp(Rθ(τ)) exp(Rθ(τ)) + ∑ τ ′∈F(τ) exp(Rθ(τ ′)) . (6) The first equality in (6) is a direct consequence of the assumption that the distribution of trajectories in MDPM follows (1) from the maximum entropy IRL framework. It can be seen from the second equality in (6) that an appealing property of ρ-projection is that the partition function is canceled off from the numerator and denominator, thereby eliminating the need to approximate it. Note that the ρ-projection is not an approximation of p(τ) despite the similar forms. F(τ) in the denominator of ρ-projection is sampled to have the same starting point and length as τ ; as such, it may not cover the space of all trajectories and hence does not approximate Z(θ) even with large M . We will discuss below how the ρ-projection achieves the aforementioned properties. Policy invariance can occur due to multiple causes and we begin our discussion with a common class of policy invariant reward functions, namely, those resulting from potential-based reward shaping (PBRS) [28]. ρ-Projection of PBRS-Based Policy Invariant Reward Functions. Reward shaping is a method used to augment the reward function with additional information (referred to as a shaping function) without changing its optimal policy [24]. 
Designing a reward shaping function can be thought of as the inverse problem of identifying the underlying cause of policy invariance. Potential-based reward shaping (PBRS) [28] is a popular shaping function that provides theoretical guarantees for single-objective single-agent domains. We summarize the main theoretical result from [28] below: Theorem 1 Consider an MDPM0 : 〈S,A, T, γ,R0〉. We define PBRS F : S ×A× S → R to be a function of the form F (s, a, s′) , γφ(s′)− φ(s) where φ(s) is any function of the form φ : S → R. Then, for all s, s′ ∈ S and a ∈ A, the following transformation fromR0 toR is sufficient to guarantee that every optimal policy inM0 is also optimal in MDPM : 〈S,A, T, γ,R〉: R(s, a, s′) , R0(s, a, s ′) + F (s, a, s′) = R0(s, a, s ′) + γφ(s′)− φ(s) . (7) Remark 1 The work of [28] has proven Theorem 1 for the special case of deterministic policies. However, this theoretical result also holds for stochastic policies, as shown in Appendix A. Corollary 1 Given a reward function R(s, a, s′), any reward function R̂(s, a, s′) , R(s, a, s) + c is policy invariant to R(s, a, s′) where c is a constant. This is a special case of PBRS where φ(s) is a constant. The following theorem states that ρ-projection maps reward functions that are shaped using PBRS to a single point given sufficiently long trajectories: Theorem 2 Let Rθ and Rθ̂ be reward functions that are policy invariant under the definition in Theorem 1. Then, w.l.o.g., for a given expert trajectory τ with length L, limL→∞ ρτ (θ̂) = ρτ (θ) . (8) Its proof is in Appendix B. In brief, when summing up F (s, a, s′) (from Theorem 1) across the states and actions in a trajectory, most terms cancel out leaving only two terms: (a) φ(s0) which depends on the start state s0 and (b) γLφ(sL) which depends on the end state sL. With a sufficiently large L, the second term reaches zero. Our definition of ρτ (θ) assumes that s0 is the same for all trajectories. As a result, the influence of these two terms and by extension, the influence of the reward shaping function is removed by the ρ-projection. Corollary 2 ρτ (θ̂) = ρτ (θ) if (a) Rθ and Rθ̂ are only state dependent or (b) all τ ′ ∈ F(τ) have the same end state as τ in addition to the same starting state and same length. Its proof is in Appendix C. ρ-Projection of Other Classes of Policy Invariance. There may exist other classes of policy invariant reward functions for a given IRL problem. How does the ρ-projection handle these policy invariant reward functions? We argue that ρ-projection indeed maps all policy invariant reward functions (regardless of their function class) to a single point if (1) holds true. Definition 1 casts the ρ-projection as a function of the likelihood of given (fixed) trajectories. Hence, the ρ-projection is identical for reward functions that are policy invariant since the likelihood of a fixed set of trajectories is the same for such reward functions. The ρ-projection can also be interpreted as a ranking function between the expert demonstrations and uniformly sampled trajectories, as shown in [8]. A high ρ-projection implies a higher preference for expert trajectories over uniformly sampled trajectories with this relative preference decreasing with lower ρ-projection. This ensures that reward functions with similar likelihoods are mapped to nearby points. 3.3 ρ-RBF: Using the ρ-Projection in BO-IRL For simplicity, we have restricted the above discussion to a single expert trajectory τ . 
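As a concrete illustration of this single-trajectory case, the sketch below computes ρ_τ(θ) from (6) for a small tabular MDP. It is our own illustrative code, not the authors' implementation: the linear reward parameterization, the stand-in uniform transition sampler, and the log-sum-exp stabilization are all assumptions made for the example.

```python
# Minimal sketch of the single-trajectory rho-projection (Eq. 6) for a tabular MDP.
# Illustrative only; the linear reward features and the uniform sampler are assumptions.
import numpy as np

def discounted_reward(trajectory, theta, features, gamma=0.95):
    """R_theta(tau) = sum_t gamma^t * theta . phi(s_t, a_t)."""
    return sum((gamma ** t) * features[s, a] @ theta
               for t, (s, a) in enumerate(trajectory))

def sample_uniform_trajectories(start_state, length, n_states, n_actions, M, rng):
    """F(tau): M trajectories with the same start state and length, actions drawn uniformly."""
    trajs = []
    for _ in range(M):
        s, traj = start_state, []
        for _ in range(length):
            a = rng.integers(n_actions)
            traj.append((s, a))
            s = rng.integers(n_states)   # stand-in for sampling from a transition model P(s'|s,a)
        trajs.append(traj)
    return trajs

def rho_projection(theta, expert_traj, features, M=20, gamma=0.95, rng=None):
    """rho_tau(theta) = exp(R_theta(tau)) / (exp(R_theta(tau)) + sum_{tau'} exp(R_theta(tau')))."""
    rng = rng or np.random.default_rng(0)
    n_states, n_actions, _ = features.shape
    fake_trajs = sample_uniform_trajectories(expert_traj[0][0], len(expert_traj),
                                             n_states, n_actions, M, rng)
    log_r = [discounted_reward(expert_traj, theta, features, gamma)]
    log_r += [discounted_reward(t, theta, features, gamma) for t in fake_trajs]
    log_r = np.array(log_r) - max(log_r)           # stabilize before exponentiating
    weights = np.exp(log_r)
    return weights[0] / weights.sum()              # the partition function Z(theta) cancels out

# Usage: project one candidate reward parameter to a scalar in (0, 1) for this trajectory.
features = np.random.default_rng(1).normal(size=(4, 2, 3))   # phi(s, a) in R^3
expert_traj = [(0, 1), (2, 0), (3, 1)]
print(rho_projection(np.array([1.0, -0.5, 0.2]), expert_traj, features))
```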
In practice, we typically have access to K expert trajectories and can project θ to a K-dimensional vector [ρτk(θ)] K k=1. The similarity of two reward functions can now be assessed by the Euclidean distance between their projected points. In this work, we use a simple RBF kernel after the ρ-projection, which results in the ρ-RBF kernel; other kernels can also be used. Algorithm 2 in Appendix E describes in detail the computations required by the ρ-RBF kernel. With the ρ-RBF kernel, BO-IRL follows standard BO practices with EI as an acquisition function (see Algorithm 1 in Appendix E). BO-IRL can be applied to both discrete and continuous environments, as well as model-based and model-free settings. Fig. 3 illustrates the ρ-projection “in-action” using the Gridworld example. Recall the reward function in this environment is parameterized by θ = {θ0, θ1, θ2}. By varying θ2 (translation) while keeping {θ0, θ1} constant, we generate reward functions that are policy invariant, as per Corollary 1. The yellow stars are two such policy invariant reward functions (with fixed {θ0, θ1} and two different values of θ2) that share identical LIRL (i.e., indicated by color). Fig. 3c shows a PCA-reduced representation of the 20-dimensional ρ-space (i.e., the range of the ρ-projection). These two reward parameters are mapped to a single point. Furthermore, reward parameters that are similar in likelihood (red, blue, and yellow stars) are mapped close to one other. Using the ρ-RBF in BO yields a better posterior and samples, as illustrated in Fig. 2d. 3.4 Related Work Our approach builds upon the methods and tools developed to address IRL, in particular, maximum entropy IRL (ME-IRL) [38]. However, compared to ME-IRL and its deep learning variant: maximum entropy deep IRL (deep ME-IRL) [37], our BO-based approach can reduce the number of (expensive) exact policy evaluations via better exploration. Newer approaches such as guided cost learning (GCL) [12] and adversarial IRL (AIRL) [14] avoid exact policy optimization by approximating the policy using a neural network that is learned along with the reward function. However, the quality of the solution obtained depends on the heuristics used and similar to ME-IRL: These methods return a single solution. In contrast, BO-IRL returns the best-seen reward function (possibly a set) along with the GP posterior which models LIRL. A related approach is Bayesian IRL (BIRL) [32] which incorporates prior information and returns a posterior over reward functions. However, BIRL attempts to obtain the entire posterior and utilizes a random policy walk, which is inefficient. In contrast, BO-IRL focuses on regions with high likelihood. GP-IRL [20] utilizes a GP as the reward function, while we use a GP as a surrogate for LIRL. Compatible reward IRL (CR-IRL) [25] can also retrieve multiple reward functions that are consistent with the policy learned from the demonstrations using behavioral cloning. However, since demonstrations are rarely exhaustive, behavioral cloning can overfit, thus leading to an incorrect policy. Recent work has applied adversarial learning to derive policies, specifically, by generative adversarial imitation learning (GAIL) [16]. However, GAIL directly learns the expert’s policy (rather the a reward function) and is not directly comparable to BO-IRL. 4 Experiments and Discussion In this section, we report on experiments designed to answer two primary questions: Q1 Does BO-IRL with ρ-RBF uncover multiple reward functions consistent with the demonstrations? 
Q2 Is BO-IRL able to find good solutions compared to other IRL methods while reducing the number of policy optimizations required? Due to space constraints, we focus on the key results obtained. Additional results and plots are available in Appendix F. Setup and Evaluation. Our experiments were conducted using the four environments shown in Fig. 4: two model-based discrete environments, Gridworld and Börlange road network [13], and two model-free continuous environments, Point Mass Maze [14] and Fetch-Reach [31]. Evaluation for the Fetch-Reach task environment was performed by comparing the success rate of the optimal policy πθ̂ obtained from the learned reward θ̂. For the other environments, we have computed the expected sum of rewards (ESOR) which is the average ground truth reward that an agent receives while traversing a trajectory sampled using πθ̂. For BO-IRL, the best-seen reward function is used for the ESOR calculation. More details about the experimental setup is available in Appendix D. BO-IRL Recovers Multiple Regions of High Likelihood. To answer Q1, we examine the GP posteriors learned by BO-IRL (with ρ-RBF kernel) and compare them against Bayesian IRL (BIRL) with uniform prior [32]. BIRL learns a posterior distribution over reward functions, which can also be used to identify regions with high-probability reward functions. Figs. 5a and 5c show that BIRL assigns high probability to reward functions adjacent to the ground truth but ignores other equally probable regions. In contrast, BOIRL has identified multiple regions of high likelihood, as shown in Figs. 5b and 5d. Interestingly, BO-IRL has managed to identify multiple reward functions with lower NLL than the expert’s true reward (as shown by red crosses) in both environments. For instance, the linear “bands” of low NLL values at the bottom of Fig. 5d indicate that the travel patterns of the expert agent in the Börlange road network can be explained by any reward function that correctly trades off the time needed to traverse a road segment with the number of left turns encountered; left-turns incur additional time penalty due to traffic stops. Figs. 6a and 6b show the GP posterior learned by BO-IRL for the two continuous environments. The Fetch-Reach task environment has a discontinuous reward function of the distance threshold and penalty. As seen in Fig. 6a, the reward function space in the Fetch-Reach task environment has multiple disjoint regions of high likelihood, hence making it difficult for traditional IRL algorithms to converge to the true solution. Similarly, multiple regions of high likelihood are also observed in the Point Mass Maze setting (Fig. 6b). BO-IRL Performs Well with Fewer Iterations Relative to Existing Methods. In this section, we describe experimental results related to Q2, i.e., whether BO-IRL is able to find high-quality solutions within a given budget, as compared to other representative state-of-the-art approaches. We compare BO-IRL against BIRL, guided cost learning (GCL) [12] and adversarial IRL (AIRL) [14]. As explained in Appendix D.5, deep ME-IRL [37] has failed to give meaningful results across all the settings and is hence not reported. Note that GCL and AIRL do not use explicit policy evaluations and hence take less computation time. However, they only return a single reward function. As such, they are not directly comparable to BO-IRL, but serve to illustrate the quality of solutions obtained using recent approximate single-reward methods. 
BO-IRL with the RBF and Matérn kernels does not have the overhead of calculating the projection function and therefore has a faster computation time. However, as seen from Fig. 2, these kernels fail to characterize the reward function space correctly. We ran BO-IRL with the RBF, Matérn, and ρ-RBF kernels. Table 1 summarizes the results for the Gridworld environment, the Börlange road network, and Point Mass Maze. Since no ground truth reward is available for the Börlange road network, we used the reward function in [13] and generated artificial trajectories.2 BO-IRL with ρ-RBF reached the expert’s ESOR with fewer iterations than the other tested algorithms across all the settings. BIRL has a higher success rate in the Gridworld environment compared to our method; however, it requires a significantly larger number of iterations, with each iteration involving expensive exact policy optimization. It is also worth noting that AIRL and GCL are unable to exploit the transition dynamics of the Gridworld environment and Börlange road network settings. This in turn results in unnecessary querying of the environment for additional trajectories to approximate the policy function. BO-IRL can flexibly handle both model-free and model-based environments through an appropriate selection of the policy optimization method. 2BO-IRL was also tested on the real-world trajectories from the Börlange road network dataset; see Fig. 11 in Appendix F.4. Fig. 7c shows that policies obtained from rewards learned using ρ-RBF achieve higher success rates than those from the other kernels in the Fetch-Reach task environment.3 Interestingly, the success rate falls in later iterations due to the discovery of reward functions that are consistent with the demonstrations but do not align with the actual goal of the task. For instance, the NLL for Fig. 7b is less than that for Fig. 7a. However, the intention behind this task is clearly better captured by the reward function in Fig. 7a: the distance threshold from the target (blue circle) is small, indicating that the robot gripper has to approach the target. In comparison, the reward function in Fig. 7b encodes a large distance threshold, which rewards every action inside the blue circle. These experiments show that “blindly” optimizing NLL can lead to poor policies. The different solutions discovered by BO-IRL can be further analyzed downstream to select an appropriate reward function or to tweak state representations. 5 Conclusion and Future Work This paper describes a Bayesian optimization approach to reward function learning called BO-IRL. At the heart of BO-IRL is our ρ-projection (and the associated ρ-RBF kernel), which enables efficient exploration of the reward function space by explicitly accounting for policy invariance. Experimental results are promising: BO-IRL uncovers multiple reward functions that are consistent with the expert demonstrations while reducing the number of exact policy optimizations. Moving forward, BO-IRL opens up new research avenues for IRL. For example, we plan to extend BO-IRL to handle higher-dimensional reward function spaces, batch modes, federated learning, and nonmyopic settings where recently developed techniques (e.g., [10, 11, 17, 18, 21, 33]) may be applied. 3AIRL and GCL were not tested on the Fetch-Reach task environment as the available code was incompatible with the environment.
Broader Impact It is important that our autonomous agents operate with the correct objectives to ensure that they exhibit appropriate and trustworthy behavior (ethically, legally, etc.) [19]. This issue is gaining broader significance as autonomous agents are increasingly deployed in real-world settings, e.g., in the form of autonomous vehicles, intelligent assistants for medical diagnosis, and automated traders. However, specifying objectives is difficult, and as this paper motivates, reward function learning via demonstration likelihood optimization may also lead to inappropriate behavior. For example, our experiments with the Fetch-Reach environment show that apparently “good” solutions in terms of NLL can correspond to poor policies. BO-IRL takes one step towards addressing this issue by providing an efficient algorithm for returning more information about potential reward functions in the form of discovered samples and the GP posterior. This approach can help users further iterate to arrive at an appropriate reward function, e.g., to avoid policies that cause unexpected or undesirable behavior. As with other learning methods, there is a risk of misuse. This work does not consider constraints that limit the reward functions that can be learned. As such, users may teach robots to perform unethical or illegal actions; consider the recent incident where users taught Microsoft’s chatbot Tay to spout racist and anti-social tweets. With robots that are capable of physical actions, consequences may be more severe, e.g., bad actors may teach the robot to cause both psychological and physical harm. A more subtle problem is that harmful policies may result unintentionally from misuse of BO-IRL, e.g., when the assumptions of the method do not hold. These issues point to potential future work on verification or techniques to enforce constraints in BO-IRL and other IRL algorithms. Acknowledgments and Disclosure of Funding This research/project is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program, the Singapore-MIT Alliance for Research and Technology (SMART) Future Urban Mobility (FM) IRG, and the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-RP-2019-011). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
1. What is the focus and contribution of the paper regarding Gaussian process Bayesian optimization? 2. What are the strengths of the proposed method, particularly in its theoretical grounding and empirical evaluation? 3. Do you have any concerns or questions regarding the paper's content, such as the definition of p(tau) or the absence of GAIL in comparisons? 4. How does the reviewer assess the significance and novelty of the contribution, and how do they think it addresses the issue of exploration in reward parameters? 5. Are there any limitations or weaknesses in the paper that need to be addressed?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes the rho-projection, which is an alternative representation of a reward function that makes Gaussian process Bayesian optimization applicable to the reward function space. This paper addresses the issue of the exploration of reward parameters. This paper applies a BO technique on the newly designed feature space. The proposed method is tested on three simulation environments; it shows similar performance to other compared methods in two environments and outperforms them in one environment. _____________________________________________________ While the response addresses some of my questions, there still exist unclear parts. In particular, the definition of p(tau) is still unclear. I know p(tau) is deterministic in each run, and that is because tau is determined. Then, the next question is whether tau is a random variable or not. To me, it is a very weird setting that tau is deterministic, since most IRL methods assume that the expert shows stochastic behavior and thus tau is usually a random variable. I have updated my score after reading the response. Strengths 1. theoretical grounding: weak This paper provides some theorems but, to me, none of the theorems are novel. First, Corollary 1 is trivial since adding some constant value cannot change the optimal policy. Thus, Theorem 1 is not essential to prove Corollary 1. Furthermore, Theorem 1 is not novel (see Lemma 1 in [1]). Second, I think Theorem 2 is trivial. Theorem 2 has no theoretical contribution. Furthermore, Definition 1 is ill-posed. In Definition 1, F(\tau) indicates a set of uniformly sampled trajectories. Does that mean F(\tau) is a random variable? Then rho is a random variable, too? I think using the expectation is more reasonable. Furthermore, this paper is not the first paper to define the concept of rho [2, 3]. 2. empirical evaluation: borderline Simulations show promising results for the proposed method. However, I have the following questions. 1) Why is GAIL [4] not compared? GAIL [4] is a baseline algorithm and should be compared. 2) Why do all algorithms show poor performance on the point mass maze problem? It seems the easiest one among all simulations. 3. significance and novelty of the contribution: strong I totally agree with the motivation of this paper and the approach is quite novel. But the theoretical and experimental results are too weak to verify whether the idea is really working or not. [1] Achiam, J., Held, D., Tamar, A., & Abbeel, P. (2017, August). Constrained policy optimization. In Proceedings of the 34th International Conference on Machine Learning, 2017, (pp. 22-31). [2] Ziebart, B. D., Maas, A. L., Bagnell, J. A., & Dey, A. K. (2008, July). Maximum entropy inverse reinforcement learning. In AAAI (Vol. 8, pp. 1433-1438). [3] Ziebart, Brian D. "Modeling purposeful adaptive behavior with the principle of maximum causal entropy." (2010). [4] Ho, Jonathan, and Stefano Ermon. "Generative adversarial imitation learning." Advances in Neural Information Processing Systems. 2016. Weaknesses See strengths
NIPS
Title Contrastive Graph Structure Learning via Information Bottleneck for Recommendation Abstract Graph convolution networks (GCNs) for recommendations have emerged as an important research topic due to their ability to exploit higher-order neighbors. Despite their success, most of them suffer from the popularity bias brought by a small number of active users and popular items. Also, a real-world user-item bipartite graph contains many noisy interactions, which may hamper the sensitive GCNs. Graph contrastive learning show promising performance for solving the above challenges in recommender systems. Most existing works typically perform graph augmentation to create multiple views of the original graph by randomly dropping edges/nodes or relying on predefined rules, and these augmented views always serve as an auxiliary task by maximizing their correspondence. However, we argue that the graph structures generated from these vanilla approaches may be suboptimal, and maximizing their correspondence will force the representation to capture information irrelevant for the recommendation task. Here, we propose a Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which adaptively learns whether to drop an edge or node to obtain optimized graph structures in an end-to-end manner. Moreover, we innovatively introduce the Information Bottleneck into the contrastive learning process to avoid capturing irrelevant information among different views and help enrich the final representation for recommendation. Extensive experiments on public datasets are provided to show that our model significantly outperforms strong baselines. 2 1 Introduction Recommender systems have been widely deployed to alleviate information overload in diverse scenarios including e-commerce, online news and multimedia contents, which requires high-quality user and item representations learned from the historical interactions [7, 14, 43]. Recently, thanks to the powerful capability in modeling graph-structured data, Graph Convolution Networks (GCNs) provide an efficient way to integrate multi-hop neighbors into node representation learning and show prominent performance in recommendation [37, 30, 8]. Although encouraging performance has been achieved, we argue that most GCN-based recommender models suffer from the following two limitations, of which the impacts on the user’s exhibited preference are presented in Fig. 1. i) Popularity Bias. Items inherently have different customer sizes, and this imbalance can potentially lead to popularity bias [45]. In most recommender systems, the customer size for items usually follows a long-tail distribution, which means a few items have massive customers while the majority have few customers. Similarly, most users have few interactions. This skewed data distribution will bias GCN-based models towards the popular users and items easily ∗Equal contributions from both authors. This work is done when Chunyu Wei works as an intern at Alibaba. 2The code is available on https://github.com/weicy15/CGI. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). during multi-hop convolution, which may hamper the representation learning. ii) Interaction Noises. User-item interactions usually contain noises especially in the scenarios with only implicit feedbacks (e.g., clicks and purchases). 
More specifically, these noisy edges in the bipartite graph are not necessarily aligned with user preferences [18], since it’s common that the user clicks something by mistake or finds something boring after purchasing. GCN-based models are known to be vulnerable to the quality of the input graphs [44], which means aggregating misleading neighborhood information is likely to lead to sub-optimal performance. Recent advances in graph contrastive learning [27, 38] have identified an effective training scheme for mitigating popularity bias and increasing robustness for noise on graph-based tasks, which inspire many studies [31, 41, 33] to introduce this training scheme to enhance representation learning for recommendations. Nevertheless, existing studies have two limitations. First, most methods perform data augmentation by randomly dropping edges/nodes to change the graph structure [31], shuffling the embeddings to corrupt the node representations [41], or relying on predefined rules [6]. However, within unsupervised settings, structures created from these vanilla approaches may be suboptimal for recommendation tasks and also lack persuasive rationales for why the randomly dropped edges/nodes alleviate the popularity bias and interaction noises. Like the obtained representation No.1 in Fig. 1, structures created from these vanilla approaches may deviate from the optimal area. Second, most methods generate multiple views only to serve as an auxiliary task by maximizing the agreement of node representations among these views, which may force the user or item representation in different views to capture the information irrelevant for the recommendation task. For example, the obtained representation No.1 in Fig. 1 contains much information irrelevant to the real preference. So we believe that a good augmentation (e.g., No.2 in Fig. 1) should cover as much optimal area as possible while being as small as possible to reduce useless information. To address the aforementioned limitations, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which contains two key components: learnable graph augmentation and information bottleneck contrastive learning. First, we propose learnable graph augmentation to learn whether to drop an edge or node to transform the original bipartite graph into correlated views, which will be jointly optimized with the downstream recommendation in an end-to-end fashion. As a result, these generated views can intentionally reduce the influence of popular nodes while preserving information of the isolated nodes, and thus help to mitigate the popularity bias. The intuition behind is that random dropout will indiscriminately drop nodes or edges regardless of the corresponding node degrees, while by message passing mechanism, GCNs are easier to reconstruct the missing information of popular users or items, but much harder to reconstruct those isolated nodes with few connections, thus may overemphasize those high-degree nodes. These generated views with debiased information are all fed into the GCN-based recommender for multi-view representation learning to increase the ability against popularity bias. Second, we proposed to integrate different views into a compact representation for the downstream recommendation tasks, which can further improve the robustness of the model. 
Generally, when information from different views complements each other, it can be expected that the multi-view representation learning approaches can improve downstream performance [28]. So we argue that simply maximizing the mutual information in the conventional graph contrastive learning may push the representations of different views to capture information irrelevant to the downstream task. Inspired by the recent advances of Information Bottleneck (IB) [32], which encourages the representation to capture the minimum sufficient information for the downstream task, we utilize the IB principle to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. By doing so, the learnable graph augmenters can learn to remove noisy interactions in the original graph as much as possible, since these interactions are of no help for the downstream recommendation. Also, the IB principle helps representations of different views to capture collaborative information of different semantics complement to each other. The contributions of this paper are summarized as follows. (1) We propose the CGI to construct optimized graph structures by dropping nodes and edges adaptively for the multi-view representation learning of users and items, which provides rationales for alleviating the popularity bias. (2) To efficiently drop information irrelevant to the downstream recommendation, we innovatively integrate information bottleneck into the multi-view contrastive learning process for recommendation and prove that it can better mitigate interaction noises. (3) Experimental results show that our method outperforms the state-of-the-art methods on three benchmark datasets from different domains. 2 Related Work Graph-based Recommendation Early works exploiting the user-item bipartite graph for recommendation like ItemRank [3] usually followed the label propagation mechanism to propagate users’ preference over the graph, i.e., encouraging connected nodes to have similar labels. In recent years, Graph Convolution Networks (GCNs) have made great progress in representation learning tasks including node classification and link prediction [5, 12, 35]. Motivated by the strength of GCNs, several works [24, 8, 37, 30] have adapted GCNs on the user-item bipartite graph to learn more robust latent representations for users and items in recommender systems. Contrastive Learning Contrastive Learning (CL) [22, 9] was firstly proposed to train CNNs for image representation learning. Graph Contrastive Learning (GCL) applies the idea of CL on GNNs. DGI [27] and InfoGraph [19] learn node representations according to the mutual information between nodes and the whole graph. Peng et al. [15] developed an unsupervised learning model trained by maximizing mutual information of nodes between the input and output of a graph neural encoder. Hu et al. [10] extend the idea to build contrastive pairs between nodes and subgraphs. In addition, GCC [16] designs the pre-training task as subgraph instance discrimination in and across networks and leverage CL to empower GNNs. And a very recent work SGL [31] supplements the classical supervised task of recommendation with an auxiliary graph CL task, which generates multiple views of a node and maximizes the agreement between different views. 
However, it differs from our work in: (1) SGL [31] generates contrastive pairs by randomly dropping edges/nodes, while our work adopts a learnable augmenter to optimize the generated views. (2) SGL [31] utilizes conventional CL as an auxiliary task by maximizing the agreement of augmentation views, while we propose to encourage the differences between the augmentation views and the original graph. Learning by Information-Bottleneck Information Bottleneck (IB) [23] is an approach based on information theory, which states that if the obtained representation discards information from the input which is not useful for a given task, it will increase robustness for the downstream tasks. Besides, the information bottleneck principle is used in multi-view representation learning [34, 29, 2]. Formally, given the original data X with label Y, IB is to obtain a compact and effective representation Z of X. And the objective of the IB principle is as follows: max Z I(Y;Z)− βI(X;Z), (1) where β is the coefficient to balance the mutual information I(Y,Z) and I(X,Z). Recently, some works proposed to integrate the IB principle into the graph learning process. You et al. [39] propose a variational graph auto-encoder to generate contrastive views and the downstream contrastive learning utilizes IB performing on graph representations as the unsupervised loss. Both Yu et al. [40] and Yu et al. [42] aim to directly reveal the vital substructure in the subgraph level, among which [1] learns a node assignment matrix to extract the subgraph, and implements the IB of two graphs by estimating the KL-divergence from graph latent representation with a statistic network (DONSKER-VARADHAN Representation of KL-divergence). And Yu et al. [42] employ noise injection to manipulate the graph, and customizes the Gaussian prior for each input graph and the injected noise, so as to implement the IB of two graphs with a tractable variational upper bound. Our CGI differs from them, since we do not directly aim to find an optimal graph structure, instead we try to learn the graph structure complementing the original one. Then by integrating different views into a compact representation, we obtain the optimal node representation for the downstream task. Sun et al. [20] learn to mask node feature and generates new structure with the masked feature. Afterward, [20] adopt GNN to learn the distribution of graph representation and utilize the KL-divergence between the learned distribution and the prior distribution to implement the IB. All these methods aim to find a better structure or representation to replace the original graph for the downstream task, while our CGI follows a multi-view representation learning schema. IB is utilized to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. Besides the noise-invariance property, IB helps representations of different views to capture collaborative information of different semantics that complement each other. AD-GCL [21] shares some ideas with our CGI but there are fundamental differences. Specifically, AD-GCL focuses on training self-supervised GNNs for graph-level tasks. In contrast, CGI aims to mitigate the popularity bias and interaction noises of node-level collaborative filtering (CF). In addition, AD-GCL adopts an adversarial strategy aiming to maximize the agreement of final representations of different views. 
Instead, our CGI minimizes the mutual information of different views to capture collaborative information of different semantics. To the best of our knowledge, this is the first study on leveraging the IB principle to enhance graph-based recommendations. 3 Preliminaries Problem Definition. Let U = {u1, u2, . . . , um} denotes the set of users, and let I = {i1, i2, . . . , in} denotes the set of items. We typically use a binary matrix R ∈ Rm×n to store user-item interactions (e.g., purchases and clicks), where rui = 1 indicates that user u consumed item i while rui = 0 means that item i is unexposed to user u or user u is not interested in item i. Following most existing works [30, 8], we represent interaction data as a user-item bipartite graph G = {V, E}, where the node set V = U ∪ I and the edge set E = {eui|rui = 1, u ∈ U , i ∈ I}. The adjacency matrix AG can be formulated as follows: AG = [ 0 R RT 0 ] . (2) With respect to the adjacency matrix AG , the degree matrix DG ∈ N(m+n)×(m+n) is a diagonal matrix, in which each entry DG [i, i] denotes the number of nonzero entries in the i-th row of AG . GCN Paradigm. The core of graph convolution on graph G is to update the ego node by aggregating the representations of its neighbor nodes, which can be formulated as follows: E(l) = GCN(E(l−1),G), (3) where E(l−1) is the current representations of nodes and E(l) is the updated representations after the graph convolution layer. E(0) is the initial inputs, which are usually the ID embeddings (trainable parameters). From the vector level, Eq. 3 can be interpreted as: e(l)u = f (l) combine(e (l−1) u , f (l) aggregate({e (l) i |i ∈ Nu})), (4) e (l) i = f (l) combine(e (l−1) i , f (l) aggregate({e(l)u |u ∈ Ni})), (5) where Nu and Ni are the neighbor node set of user u and item i, respectively. There are many works designing different fcombine and faggregate [5, 26, 35]. Usually, there will be readout function to generate the final representations for the recommendation task: e = freadout({e(l)|l = 0, 1, . . . , L}). (6) For example, freadout can be concatenation [30], weighted sum [8] and retaining the last output [24]. LightGCN Brief. In this paper, we implement our CGI on the simple but effective GCN-based recommendation model LightGCN. It adopts weighted sum aggregators and abandon the use of feature transformation and nonlinear activation, of which the matrix form can be formulated as: E(l) = (D − 12 G AGD − 12 G )E (l−1), l ∈ N+, (7) where E(l−1) = [E(l−1)u ,E (l−1) i ] is the output of the previous LightGCN layer or the initial E (0). At last, LightGCN implement the freadout by weighted sum, in which the weight of each layer is set as 1 L+1 following the original work. After obtaining the representations of users and items, the inner product r̂ui = eTuei is used to predict preference score, which is commonly adopted in most recommender system: LightGCN employ the Bayesian Personalized Ranking (BPR) loss [17] to optimize the model parameters: Lrec =∑ (u,i,j)∈O −lnσ(r̂ui − r̂uj), where O = {(u, i, j)|(u, i) ∈ R+, (u, j) ∈ R−} is the pairwise training data, in which R+ denotes the observed interactions, and R− denotes the unobserved interactions. In this work, we also choose it as the objective function for the recommendation task. 4 Methodology The framework of CGI is illustrated in Fig. 2 and we detail the inference in Appendix. 
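Before detailing CGI's components, a minimal PyTorch sketch of the LightGCN backbone described above (the normalized propagation of Eq. 7 and the layer-averaged readout) may help fix notation. It is illustrative only: the dense adjacency construction and all names are our own simplifications, not the authors' implementation.

```python
# Minimal sketch of LightGCN propagation (Eq. 7) with a mean readout over layers.
# Illustrative only; tensor names and the dense adjacency are simplifications, not the authors' code.
import torch

def normalized_adjacency(R):
    """Build D^{-1/2} A D^{-1/2} from the (m x n) user-item interaction matrix R (Eqs. 2 and 7)."""
    m, n = R.shape
    A = torch.zeros(m + n, m + n)
    A[:m, m:], A[m:, :m] = R, R.T
    d = A.sum(dim=1).clamp(min=1.0)          # node degrees (clamped to avoid division by zero)
    d_inv_sqrt = d.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

def lightgcn(E0, A_norm, num_layers=3):
    """Propagate ID embeddings E0 for L layers and average them (readout weight 1/(L+1))."""
    layer_embs, E = [E0], E0
    for _ in range(num_layers):
        E = A_norm @ E                       # Eq. (7): no feature transformation, no nonlinearity
        layer_embs.append(E)
    return torch.stack(layer_embs).mean(dim=0)

# Usage: score user u against item i with an inner product r_hat_{ui} = e_u . e_i.
m, n, dim = 5, 8, 16
R = (torch.rand(m, n) > 0.7).float()         # toy implicit-feedback matrix
E0 = torch.nn.Embedding(m + n, dim).weight   # trainable ID embeddings
E = lightgcn(E0, normalized_adjacency(R))
scores = E[:m] @ E[m:].T                      # predicted preference for every user-item pair
```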
4.1 Learnable Multi-View Augmentation Most of GCN-based recommendation like LightGCN [8] fully relies on the adjacency matrix AG to refine the representations of users and items in Eq. 7. However, AG may contain many biased and noisy information as discussed in Sec. 1, which continue to propagate misleading information as the LightGCN goes deeper. On the other hand, the vanilla randomly dropout in most contrastive learning for recommendation cannot create powerful views to alleviate popularity bias and interaction noises. We hence utilize parameterized networks to generate the layer-wise optimized augmentation views. Specifically, we assign different graph convolution layers with different learned subgraphs coupled with the downstream recommendation and thus obtain multi-view user and item representations. We elaborate on two types of learnable augmentations as follows. Node-Dropping View As illustrated in Sect. 1, popular users or items in the graph may skew the data distribution and thus hinder the GCN-based recommender. So we perform learnable node dropping at each layer to mask those the influential nodes and create the Node-Dropping view, which can be formulated as: G(l)ND = {{vi ⊙ ρ (l) i | vi ∈ V}, E}, (8) where ρ(l)i ∈ {0, 1} is drawn from a Bernoulli distribution parameterized by ω (l) i , i.e., ρ (l) i ∼ Bern(ω (l) i ), which denotes whether to keep the node vi. Simply removing the selected node alongside all its connections will cause a dramatic change of the bipartite graph structure thus exerting influence on the information aggregation and making the training unstable. Thus instead of removing the selected node, we replace the selected node v with its local subgraph’s representation to obscure its original representation and retain its corresponding edges. For node v, we perform random walk on the bipartite graph G with its walk length setting as k, then we take the mean pooling of sampled nodes as v’s local subgraph’s representation. Edge-Dropping View The goal of the Edge-Dropping view is to generate a subgraph filtering out noisy edges and intentionally decreasing the influence of popular nodes for GCN layers. Similarly to the Node-Dropping view, we create the Edge-Dropping view by learnable edge dropping: G(l)ED = {V, {eij ⊙ ρ (l) ij | eij ∈ E}}, (9) where ρ(l)ij ∈ {0, 1} also follows ρ (l) ij ∼ Bern(ω (l) ij ) and denotes whether the edge eij is present. Following [26], we adopt multi-layer perceptrons (MLPs) to the parameter ω(l)i and ω (l) ij that control the whether to mask node vi and edge eij , respectively, which can be formulated as: ω (l) i = MLP (e (l) i ); ω (l) ij = MLP ([e (l) i ; e (l) j ]). (10) To efficiently optimize the multi-view structure learning in an end-to-end manner, we adopt the reparameterization trick [11] and relax the above binary entries ρ from being drawn from Bernoulli distribution to a deterministic function of parameter ω and an independent random variable ϵ, which can be formulate as: ρ = σ((log ϵ− log (1− ϵ) + ω)/τ), (11) where ϵ ∼ Uniform(0, 1), τ ∈ R+ indicates the temperature and σ(·) is the sigmoid function. With τ > 0, the function is smoothed with a well-defined gradient ∂ρ∂ω , enabling efficient optimization of the learnable establishment of Node-Dropping view and Edge-Dropping view during training. In inference, we drop the node or edge with a probability of less than 0.5. 
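A minimal sketch of this relaxed Bernoulli gate (Eq. 11) and the edge-scoring MLP of Eq. (10) is given below; it is illustrative code under our own naming and hyper-parameters (e.g., the hidden width), not the authors' implementation.

```python
# Minimal sketch of the relaxed Bernoulli gate in Eq. (11) used for learnable edge dropping.
# Illustrative only; the MLP producing omega (Eq. 10) and all names are our assumptions.
import torch

def relaxed_bernoulli_gate(omega, tau=0.2, training=True):
    """rho = sigmoid((log eps - log(1 - eps) + omega) / tau); hard 0/1 decision at inference."""
    if training:
        eps = torch.rand_like(omega).clamp(1e-6, 1 - 1e-6)    # eps ~ Uniform(0, 1)
        return torch.sigmoid((torch.log(eps) - torch.log(1 - eps) + omega) / tau)
    return (torch.sigmoid(omega) >= 0.5).float()               # keep the edge iff prob >= 0.5

class EdgeDropper(torch.nn.Module):
    """Scores each edge (u, i) from the concatenated embeddings [e_u; e_i] (Eq. 10)."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.mlp = torch.nn.Sequential(torch.nn.Linear(2 * dim, hidden),
                                       torch.nn.ReLU(),
                                       torch.nn.Linear(hidden, 1))

    def forward(self, e_user, e_item, tau=0.2):
        omega = self.mlp(torch.cat([e_user, e_item], dim=-1)).squeeze(-1)
        return relaxed_bernoulli_gate(omega, tau, self.training)   # one gate rho per edge

# Usage: reweight edges of the Edge-Dropping view; gradients flow back into the MLP.
dropper = EdgeDropper(dim=16)
e_u, e_i = torch.randn(10, 16), torch.randn(10, 16)   # embeddings of the 10 edges' endpoints
edge_weights = dropper(e_u, e_i)                       # soft gates in (0, 1) during training
```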
Afterwards, we perform GCNs to obtain the representations of users and items on these views:
$$E^{(l)}_{ND} = \mathrm{GCN}(E^{(l-1)}_{ND}, \mathcal{G}^{(l)}_{ND}), \qquad E^{(l)}_{ED} = \mathrm{GCN}(E^{(l-1)}_{ED}, \mathcal{G}^{(l)}_{ED}), \quad (12)$$
where the initial $E^{(0)}_{ND} = E^{(0)}_{ED} = E^{(0)}$. After stacking $L$ LightGCN layers, we again adopt the weighted sum to construct the final representations $E_{ND}$ and $E_{ED}$, respectively. For simplicity, we omit the augmentation types ND and ED in the symbols below and use $\tilde{E}$ to denote the representations of these augmentation views.

4.2 Information Bottleneck Contrastive Learning

Although we couple the learnable augmentation process and the recommendation process together, we find that relying solely on the recommendation objective cannot well guide the dropout process to create optimal augmentation views. Thus we adopt the Information Bottleneck principle to retain the minimum sufficient information in each view for the downstream recommendation. Specifically, different from conventional contrastive learning, we instead encourage divergence between the representations of the augmentation view and the original graph while maximizing the information relevant to the recommendation task. By doing so, we can obtain comprehensive multi-view representations and efficiently drop noisy collaborative information for the recommendation. Accordingly, the objective in Eq. 1 is instantiated as:
$$\min_{(E, \tilde{E})} \; \tilde{\mathcal{L}}_{rec} + I(E; \tilde{E}), \quad (13)$$
where $\tilde{\mathcal{L}}_{rec}$ is the BPR loss of the representation from the augmentation view and $I(E; \tilde{E})$ is the mutual information between the representations from the two corresponding views. According to [25, 19], minimizing the InfoNCE loss [4] is equivalent to maximizing a lower bound of the corresponding mutual information. We therefore adopt the negative InfoNCE to estimate the mutual information between the representations of the augmentation view and the original graph, which consists of mutual information from both the user side and the item side. Formally, for the user-side mutual information, we treat the representations of the same user in the augmentation view and the original graph as positive pairs (i.e., $\{(e_i, \tilde{e}_i) \mid v_i \in \mathcal{U}\}$), and the representations of two different users in the augmentation view and the original graph as negative pairs (i.e., $\{(e_i, \tilde{e}_j) \mid v_i, v_j \in \mathcal{U}, i \neq j\}$):
$$I(E_u; \tilde{E}_u) = \sum_{v_i \in \mathcal{U}} \log \frac{\exp(s(e_i, \tilde{e}_i)/\tau')}{\sum_{v_j \in \mathcal{U}} \exp(s(e_i, \tilde{e}_j)/\tau')}, \quad (14)$$
where $s(\cdot)$ measures the similarity between two vectors and is set to the cosine similarity, and $\tau'$ is a temperature hyper-parameter similar to that in Eq. 11. Analogously, we obtain the item-side mutual information $I(E_i; \tilde{E}_i)$, and the overall mutual information combines the two sides: $I(E; \tilde{E}) = I(E_u; \tilde{E}_u) + I(E_i; \tilde{E}_i)$.

4.3 Optimization

To obtain comprehensive multi-view representations, we utilize two parameterized networks to learn to create the Node-Dropping view and the Edge-Dropping view simultaneously. In order to integrally explore both views for better recommendation, we jointly optimize the recommendation tasks of these views and the self-supervised IB contrastive learning:
$$\mathcal{L} = \mathcal{L}_{rec} + \mathcal{L}^{ND}_{rec} + \mathcal{L}^{ED}_{rec} + \lambda\big(I(E; E_{ND}) + I(E; E_{ED})\big) + \beta \lVert \Theta \rVert_2^2, \quad (15)$$
where $\mathcal{L}^{ND}_{rec}$ and $\mathcal{L}^{ED}_{rec}$ are the recommendation objectives of the Node-Dropping view and the Edge-Dropping view, respectively. The last term is an L2 regularization. $\lambda$ and $\beta$ are hyper-parameters controlling the strength of the IB contrastive learning task and the L2 regularization, respectively.
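The mutual-information term in Eq. 14 can be estimated with an InfoNCE-style objective over the user (and, analogously, item) embeddings from the original and augmented graphs, and then plugged into the joint objective of Eq. 15. The sketch below is a plausible reconstruction under common conventions (in-batch negatives, cosine similarity, batch averaging instead of the paper's sum); the function names and batching choice are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def info_nce_mi(orig_emb, aug_emb, temperature=0.2):
    """InfoNCE-style estimate of I(E; E~) for one side (users or items), cf. Eq. 14.

    orig_emb, aug_emb: (B, d) embeddings of the same B nodes from the original and the
    augmented view; row i of each tensor corresponds to the same node. Returns the
    (batch-averaged) log-softmax of the positive pairs, which Eq. 15 adds with weight lambda.
    """
    z1 = F.normalize(orig_emb, dim=-1)          # cosine similarity via normalized dot products
    z2 = F.normalize(aug_emb, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B): entry (i, j) = s(e_i, e~_j) / tau'
    labels = torch.arange(z1.size(0), device=z1.device)
    return -F.cross_entropy(logits, labels)     # = mean_i log softmax(logits)[i, i]

# Joint objective of Eq. 15 (lambda weights the IB term, beta the L2 term), schematically:
# loss = rec_loss + rec_loss_nd + rec_loss_ed \
#        + lam * (info_nce_mi(E_u, E_nd_u) + info_nce_mi(E_i, E_nd_i)
#                 + info_nce_mi(E_u, E_ed_u) + info_nce_mi(E_i, E_ed_i)) \
#        + beta * l2_reg
```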
Proposition 1. Formally, we denote the learned augmentation view as $\tilde{\mathcal{G}}$, the noisy graph structure as $\mathcal{G}'$, and the downstream recommendation information as $Y_{Rec}$. Suppose $\mathcal{G}'$ is irrelevant to $Y_{Rec}$; then the mutual information $I(\mathcal{G}'; \tilde{\mathcal{G}})$ is upper bounded by $I(\mathcal{G}; \tilde{\mathcal{G}}) - I(Y_{Rec}; \tilde{\mathcal{G}})$:
$$I(\mathcal{G}'; \tilde{\mathcal{G}}) \leq I(\mathcal{G}; \tilde{\mathcal{G}}) - I(Y_{Rec}; \tilde{\mathcal{G}}). \quad (16)$$
Proof. Following the Markov chain assumption in [1], we suppose $\mathcal{G}$ is determined by $Y_{Rec}$ and $\mathcal{G}'$, and we can define the Markov chain $(Y_{Rec}, \mathcal{G}') \rightarrow \mathcal{G} \rightarrow \tilde{\mathcal{G}}$. By the Data Processing Inequality, we have:
$$I(\mathcal{G}; \tilde{\mathcal{G}}) \geq I((Y_{Rec}, \mathcal{G}'); \tilde{\mathcal{G}}) = I(\mathcal{G}'; \tilde{\mathcal{G}}) + I(Y_{Rec}; \tilde{\mathcal{G}} \mid \mathcal{G}') = I(\mathcal{G}'; \tilde{\mathcal{G}}) + H(Y_{Rec} \mid \mathcal{G}') - H(Y_{Rec} \mid \mathcal{G}', \tilde{\mathcal{G}}). \quad (17)$$
Since $\mathcal{G}'$ and $Y_{Rec}$ are independent, we have $H(Y_{Rec} \mid \mathcal{G}') = H(Y_{Rec})$. Also, it is straightforward that $H(Y_{Rec} \mid \mathcal{G}', \tilde{\mathcal{G}}) \leq H(Y_{Rec} \mid \tilde{\mathcal{G}})$. Thus we can simplify Eq. 17 as follows:
$$I(\mathcal{G}; \tilde{\mathcal{G}}) \geq I(\mathcal{G}'; \tilde{\mathcal{G}}) + H(Y_{Rec}) - H(Y_{Rec} \mid \tilde{\mathcal{G}}) = I(\mathcal{G}'; \tilde{\mathcal{G}}) + I(Y_{Rec}; \tilde{\mathcal{G}}). \quad (18)$$
Thus we obtain $I(\mathcal{G}'; \tilde{\mathcal{G}}) \leq I(\mathcal{G}; \tilde{\mathcal{G}}) - I(Y_{Rec}; \tilde{\mathcal{G}})$, where $I(Y_{Rec}; \tilde{\mathcal{G}})$ is inversely proportional to $\tilde{\mathcal{L}}_{rec}$ in Eq. 13.

Eq. 16 shows that optimizing the IB contrastive objective in Eq. 13 is equivalent to minimizing the mutual information between the learned augmentation view and the noisy structure. Specifically, it provides theoretical grounds that the IB contrastive learning leads to the noise-invariance property by compressing the information in both augmentation views. Meanwhile, the IB contrastive objective also restricts the augmentation view to remain predictive for the recommendation task, which can intentionally reduce the influence of popular nodes while preserving information of the isolated nodes, and thus helps to mitigate the popularity bias.

5 Experiments

5.1 Experimental Setup

Dataset Description. Three publicly available datasets are employed in our experiments, i.e., Yelp2018, MovieLens-1M, and Douban. A detailed description can be found in the Appendix. For each dataset, we randomly select 80% of the historical interactions of each user as the training set, 10% as the validation set, and the remaining 10% as the test set.

Evaluation Metrics. To evaluate the performance of all methods, we adopt a ranking-based metric, Normalized Discounted Cumulative Gain@k (NDCG@k), and a relevance-based metric, Hit Ratio@k (denoted RECALL@k). The formulations of the two metrics are given in the Appendix. As suggested by Krichene and Rendle [13], we rank all candidate items instead of sampled item sets when calculating the above metrics, which guarantees that the evaluation process is unbiased.
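Since the exact metric formulations are deferred to the paper's appendix, the following NumPy sketch only illustrates the standard full-ranking protocol described above (ranking all candidate items, masking training interactions); the helper names and conventions are assumptions, not the authors' evaluation code.

```python
import numpy as np

def rank_all_items(scores, train_items):
    """Full ranking over all candidate items for one user, masking items seen in training."""
    scores = scores.copy()
    scores[list(train_items)] = -np.inf
    return np.argsort(-scores)                       # item ids sorted by descending score

def recall_ndcg_at_k(ranked_items, test_items, k=10):
    """Standard Recall@k and NDCG@k for one user; test_items is the held-out interaction set."""
    top_k = ranked_items[:k]
    hits = np.isin(top_k, list(test_items))          # 1 where a top-k item is in the test set
    recall = hits.sum() / max(len(test_items), 1)
    dcg = (hits / np.log2(np.arange(2, k + 2))).sum()
    ideal_hits = min(len(test_items), k)
    idcg = (1.0 / np.log2(np.arange(2, ideal_hits + 2))).sum()
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```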
Compared Methods. We compare our CGI with three classes of baseline methods: (1) MF-based methods, i.e., BPRMF [17] and NCF [7]; (2) GNN-based methods, i.e., NGCF [30] and LightGCN [8]; and (3) CL-based methods, i.e., DNN+SSL [36] and SGL [31]. We give a detailed introduction to these baselines in the Appendix. Note that DNN+SSL applies augmentation to item features, which is not applicable in our case; following [31], we apply the augmentations to the ID embeddings of items instead.

Hyper-parameters. We initialize the latent vectors of both users and items with small random values for all models. The parameters for baseline methods are initialized as in the original papers and are then carefully tuned to achieve optimal performance. For a fair comparison, the dimensions of both the user and item embeddings are fixed to 64. We use Adam with β1 = 0.9, β2 = 0.999, ϵ = 1e−8 to optimize all these methods. The batch size is set to 2048. The learning rate is set to 0.005 and decayed at the rate of 0.9 every five epochs. We set λ = 0.02 and β = 0.01 for the coefficients in Eq. 15. More details about the hyper-parameter settings of the baselines can be found in the Appendix.

5.2 Performance Comparisons

We summarize the performance of the different algorithms in terms of NDCG@k and RECALL@k (k = 10, 20) over the three datasets in Table 1. The experimental results demonstrate that CGI outperforms the other methods on all evaluation metrics. We conduct significance tests, and p-values < 0.05 indicate that the improvements of our CGI are statistically significant. Besides, we observe that the GNN-based methods perform better than the MF-based models. These results verify that exploiting higher-order connectivity in the user-item bipartite graph is essential to improving recommendation performance. This may also be the reason why the performance of DNN+SSL is inferior to those of SGL and our CGI even though all of them apply contrastive learning. We can see that the CL-based graph learning methods, including our CGI, consistently outperform the GNN-based models, which verifies the effectiveness of contrastive learning for representation learning. Besides, our CGI outperforms SGL by a large margin. The results demonstrate that, compared with the random dropping in SGL, the learnable graph augmentations optimized by the information bottleneck can create optimized augmentation views and capture more comprehensive collaborative signals.

5.3 Ablation Studies

Effectiveness of Learnable Augmentation. To understand the respective effects of node dropping and edge dropping in the learnable augmentation, we conduct ablation studies on Yelp2018 and MovieLens-1M. As shown in Table 2, we report NDCG@10 and RECALL@10 of CGI and SGL in different versions. Specifically, CGI-ND and CGI-ED denote CGI with only the Node-Dropping view and only the Edge-Dropping view adopted, respectively. SGL-ND and SGL-ED denote that the augmentation view in SGL is created by random node dropout and edge dropout, respectively. We find that: (1) Our CGI achieves clear improvements over SGL under both types of augmentation, which again verifies the effectiveness of the learnable graph augmentation optimized by the information bottleneck. (2) CGI performs better than both CGI-ND and CGI-ED. We ascribe this to multi-view learning, which enables the final representation to capture collaborative information of different semantics and thus enhances the robustness and expressiveness of the model.

Table 2: Comparison among models (NDCG@10 / RECALL@10).
            Yelp2018                 MovieLens-1M
Model       NDCG@10   RECALL@10      NDCG@10   RECALL@10
LightGCN    0.0344    0.0530         0.1696    0.1865
CGI         0.0392    0.0584         0.1979    0.2180
SGL-ND      0.0356    0.0544         0.1765    0.1948
CGI-ND      0.0369    0.0569         0.1934    0.2119
SGL-ED      0.0367    0.0552         0.1800    0.1965
CGI-ED      0.0379    0.0579         0.1916    0.2088

Figure 3: Performance (RECALL@10) of different item groups on Yelp2018 and MovieLens-1M for LightGCN, SGL, and CGI.

(3) The performance of CGI-ED is better than that of CGI-ND on the sparse dataset Yelp2018, but worse on the dense dataset MovieLens-1M. We speculate that interaction noise is more significant in the sparse dataset with less useful information, where CGI-ND is less flexible, because it removes all influence (i.e., edges) of popular nodes, which is hard to restore with scarce interactions.
But in the dense dataset, popularity bias becomes more significant, which makes CGI-ND more effective by blocking the influence of popular users or items.

Accuracy against Popularity Bias. To verify whether CGI is capable of mitigating popularity bias, we split the item set $\mathcal{I}$ evenly into 5 groups (1-5) based on their popularity. The larger the group ID, the larger the degrees of the items in it. Following [31], we decompose the RECALL@10 metric of the whole dataset into the contributions of the above five groups of items:
$$\mathrm{RECALL}^{(g)} = \frac{\sum_{i=1}^{k} \mathrm{rel}_i^{(g)}}{|\mathcal{I}^u_{test}|}, \quad (19)$$
where $\mathrm{rel}_i^{(g)} = 1$ denotes that the item at rank $i$ is both in the test set and in the $g$-th item group. As such, $\mathrm{RECALL}^{(g)}$ measures the performance over the $g$-th item group. From Fig. 3, we can see that recommender systems tend to recommend popular items while leaving unpopular items less likely to be discovered, which further exacerbates the long-tail distribution. Our CGI can significantly improve the recommendation accuracy on long-tail items. Although the two GCL methods, CGI and SGL, show no superiority on the top 20% of items, from the overall improvements in Table 1 we can see that they better capture long-tail items' information in the user preference representations.
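A sketch of how the group-wise decomposition in Eq. 19 could be computed is shown below. The even split into five popularity groups and the variable names are illustrative assumptions consistent with the description above, not the authors' evaluation code.

```python
import numpy as np

def popularity_groups(item_degrees, num_groups=5):
    """Split items evenly into groups 1..num_groups by increasing popularity (degree)."""
    order = np.argsort(item_degrees)                       # least popular items first
    group_of = np.empty_like(order)
    for g, chunk in enumerate(np.array_split(order, num_groups), start=1):
        group_of[chunk] = g                                # assign group id to items in this chunk
    return group_of                                        # group id per item

def group_recall_at_k(top_k_items, test_items, group_of, num_groups=5, k=10):
    """Eq. 19: per-group contribution to RECALL@k for one user."""
    recalls = np.zeros(num_groups)
    test_set = set(test_items)
    for item in top_k_items[:k]:
        if item in test_set:                               # rel_i^(g) = 1, credited to the item's group
            recalls[group_of[item] - 1] += 1.0
    return recalls / max(len(test_items), 1)               # divide by |I^u_test|
```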
Robustness to Interaction Noises. To verify CGI's robustness to interaction noise, we generate different proportions of negative interactions (i.e., 5%, 10%, 15%, and 20%) to contaminate the training set and report the performance on the unchanged test set. Fig. 4 shows the NDCG@10 on Yelp2018 and MovieLens-1M and the performance degradation ratio for the corresponding contaminated training sets. As expected, the more noise we add, the worse all models perform, since all models use LightGCN as the basic backbone, which fully relies on the adjacency matrix $A_\mathcal{G}$ to refine the representations of users and items in Eq. 7. However, the performance degradation of our CGI is smaller than that of the other models on both datasets. Moreover, the gaps between CGI and the other models grow larger as the noise increases. This suggests that our CGI framework can mitigate the noise in interaction data more efficiently, and that our learnable augmentation optimized by the IB contrastive learning remains robust in the presence of a high proportion of noise, which is consistent with our proof in Sect. 4.3. We also observe that CGI is more robust on MovieLens-1M. This makes sense since MovieLens-1M is much denser than Yelp2018 according to the statistics in the Appendix, and thus the bipartite graph of Yelp2018 is more sensitive to the added noise.

Effectiveness of Information Bottleneck. To investigate the effect of the information bottleneck, we consider the following variants of CGI with different contrastive learning strategies: our complete method (CGI), our method without contrastive learning (GL), and our method that instead maximizes the correspondence among different views (i.e., $\min \tilde{\mathcal{L}}_{rec} - I(E; \tilde{E})$) (GCL). Fig. 5 shows the recommendation training loss w.r.t. the number of training steps and the evaluation results on Yelp, from which we observe that the multi-view graph learning frameworks driven by contrastive learning converge more easily. Specifically, when maximizing the mutual information among views, the loss of the GCL framework drops more quickly at the very beginning and then settles into a steadily decreasing state. However, with IB contrastive learning, the recommendation loss of our CGI keeps declining after an initial sharp drop instead of stopping early, and is thus more likely to converge to a better local optimum. This is probably why CGI performs better than both GL and GCL, as illustrated by the right part of Fig. 5. We also find that multi-view graph learning benefits more from IB contrastive learning than from the conventional one, since the former encourages dropping the noisy information irrelevant to the recommendation, as discussed in Sect. 4.3.

Performance with Other GNNs. To verify the generalization of our method to other GNNs, we evaluate CGI and the baseline SGL on two other popular GNN-based recommenders, GC-MC [24] and NGCF [30]. The experimental results are shown in Table 3. Both graph contrastive learning methods improve their backbones. On NGCF, CGI shows consistent superiority over SGL. On GC-MC, CGI does not show a significant improvement over SGL, probably because GC-MC only utilizes one GCN layer and can therefore only exploit 1-hop neighbors, which makes it challenging for the learnable augmentation to fetch enough information.

6 Conclusions

In this paper, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI) to learn better augmentations from different aspects for the multi-view representation learning of recommendation. In particular, we propose a fully differentiable learner that drops nodes and edges to construct different types of augmentation views coupled with the recommendation task. We innovatively integrate the information bottleneck into the multi-view contrastive learning process for recommendation and prove its effectiveness. Extensive experiments conducted on three public datasets verify the effectiveness of CGI.

Acknowledgments and Disclosure of Funding

This work was supported by Alibaba Group through the Alibaba Research Intern Program.
1. What is the focus and contribution of the paper regarding recommendation systems? 2. What are the strengths of the proposed approach, particularly in addressing popularity bias and noisy user-item interactions? 3. What are the weaknesses of the paper, especially in terms of experimental design and analysis? 4. Do you have any concerns or suggestions regarding the methodology or presentation of the paper? 5. Are there any limitations or potential drawbacks of the proposed approach that should be acknowledged and addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This work aims to alleviate the effect of popularity bias and noisy user-item interactions in graph convolutional networks (GCNs) for recommendation. To this end, the authors propose a GCN-based recommender method called Contrastive Graph Structure Learning via Information Bottleneck (CGI). In addition to the standard GCN layer producing node representations from the observed user-item graph, CGI includes node-dropping and edge-dropping components whose role is to generate augmentations of the observed graph by dropping some nodes and edges, respectively. To improve the quality of the learned augmentations, the authors introduce (in addition to the recommendation objective) an Information Bottleneck (IB)-based contrastive objective, which encourages the representations learned from the augmented graphs to be independent from those learned from the observed graph. The proposed method is evaluated and compared to some existing recommender models on three real-world datasets.

Strengths And Weaknesses
Strengths. The paper is well written and technically sound. This work investigates interesting ideas such as the IB-based contrastive objective, as well as learning the graph augmentations instead of relying on random or predefined procedures to generate them. Empirical results show that CGI outperforms the chosen baselines on the recommendation task and seems more robust to noisy user-item interactions.
Weaknesses. The claim regarding the ability of the proposed method to alleviate the popularity bias is not well supported in the paper, neither by theoretical analyses nor by convincing targeted experiments. For example, I would recommend reporting statistics about the popularity distribution of the items recommended by the different baselines. Also, some quantitative and qualitative experiments on how popular/rare items are ranked by the different models for a given set of users would be meaningful (see for instance Figures 2 and 3 in [1]). Experiments are weak. Three datasets are considered for evaluation: Yelp, MovieLens and Douban. However, most of the results are on Yelp and/or MovieLens, except in Table 1. I would recommend reporting the results of every experiment across all three datasets.
Additional comments/questions. In section 4.3, the part related to Proposition 1 is a bit hard to follow and connect to the objective of Eq. 13. This is due to using different notations for the mutual information terms in the proposition and in Eq. 13. Please consider improving the notation. I would also recommend keeping the legend consistent across all the experiments. For instance, CGI is represented by a green bar in Figures 3 and 4, while in Figure 5 it is represented by a blue bar. What type of significance test is used in the experiments, and how many trials are performed for every algorithm in Table 1?
References. [1] Liang, Dawen, et al. "Factorization meets the item embedding: Regularizing matrix factorization with item co-occurrence." Proceedings of the 10th ACM Conference on Recommender Systems. 2016.

Questions
Please refer to the strengths and weaknesses section.

Limitations
Yes.
NIPS
Title Contrastive Graph Structure Learning via Information Bottleneck for Recommendation

Abstract Graph convolution networks (GCNs) for recommendation have emerged as an important research topic due to their ability to exploit higher-order neighbors. Despite their success, most of them suffer from the popularity bias brought by a small number of active users and popular items. Also, a real-world user-item bipartite graph contains many noisy interactions, which may hamper the sensitive GCNs. Graph contrastive learning shows promising performance for solving the above challenges in recommender systems. Most existing works typically perform graph augmentation to create multiple views of the original graph by randomly dropping edges/nodes or relying on predefined rules, and these augmented views always serve as an auxiliary task by maximizing their correspondence. However, we argue that the graph structures generated by these vanilla approaches may be suboptimal, and that maximizing their correspondence will force the representation to capture information irrelevant to the recommendation task. Here, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which adaptively learns whether to drop an edge or node to obtain optimized graph structures in an end-to-end manner. Moreover, we innovatively introduce the Information Bottleneck into the contrastive learning process to avoid capturing irrelevant information among different views and to help enrich the final representation for recommendation. Extensive experiments on public datasets are provided to show that our model significantly outperforms strong baselines.

* Equal contributions from both authors. This work was done while Chunyu Wei was an intern at Alibaba. The code is available at https://github.com/weicy15/CGI. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

1 Introduction

Recommender systems have been widely deployed to alleviate information overload in diverse scenarios including e-commerce, online news, and multimedia content, which requires high-quality user and item representations learned from historical interactions [7, 14, 43]. Recently, thanks to their powerful capability in modeling graph-structured data, Graph Convolution Networks (GCNs) provide an efficient way to integrate multi-hop neighbors into node representation learning and show prominent performance in recommendation [37, 30, 8]. Although encouraging performance has been achieved, we argue that most GCN-based recommender models suffer from the following two limitations, whose impacts on the user's exhibited preference are presented in Fig. 1. i) Popularity Bias. Items inherently have different customer sizes, and this imbalance can potentially lead to popularity bias [45]. In most recommender systems, the customer size of items usually follows a long-tail distribution, which means a few items have massive numbers of customers while the majority have few. Similarly, most users have few interactions. This skewed data distribution will easily bias GCN-based models towards the popular users and items during multi-hop convolution, which may hamper the representation learning. ii) Interaction Noises. User-item interactions usually contain noise, especially in scenarios with only implicit feedback (e.g., clicks and purchases).
More specifically, these noisy edges in the bipartite graph are not necessarily aligned with user preferences [18], since it is common for a user to click something by mistake or to find something boring after purchasing it. GCN-based models are known to be vulnerable to the quality of the input graphs [44], which means that aggregating misleading neighborhood information is likely to lead to sub-optimal performance.

Recent advances in graph contrastive learning [27, 38] have identified an effective training scheme for mitigating popularity bias and increasing robustness to noise on graph-based tasks, inspiring many studies [31, 41, 33] to introduce this training scheme to enhance representation learning for recommendation. Nevertheless, existing studies have two limitations. First, most methods perform data augmentation by randomly dropping edges/nodes to change the graph structure [31], shuffling the embeddings to corrupt the node representations [41], or relying on predefined rules [6]. However, in unsupervised settings, the structures created by these vanilla approaches may be suboptimal for recommendation tasks, and they also lack persuasive rationales for why the randomly dropped edges/nodes alleviate popularity bias and interaction noise. Like the obtained representation No. 1 in Fig. 1, structures created by these vanilla approaches may deviate from the optimal area. Second, most methods generate multiple views only to serve as an auxiliary task that maximizes the agreement of node representations among these views, which may force the user or item representations in different views to capture information irrelevant to the recommendation task. For example, the obtained representation No. 1 in Fig. 1 contains much information irrelevant to the real preference. We therefore believe that a good augmentation (e.g., No. 2 in Fig. 1) should cover as much of the optimal area as possible while being as small as possible to reduce useless information.

To address the aforementioned limitations, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which contains two key components: learnable graph augmentation and information bottleneck contrastive learning. First, we propose learnable graph augmentation to learn whether to drop an edge or node in order to transform the original bipartite graph into correlated views, which are jointly optimized with the downstream recommendation in an end-to-end fashion. As a result, these generated views can intentionally reduce the influence of popular nodes while preserving information of the isolated nodes, and thus help to mitigate the popularity bias. The intuition behind this is that random dropout indiscriminately drops nodes or edges regardless of the corresponding node degrees, while, through the message passing mechanism, it is easier for GCNs to reconstruct the missing information of popular users or items but much harder to reconstruct that of isolated nodes with few connections, so the models may overemphasize high-degree nodes. These generated views with debiased information are all fed into the GCN-based recommender for multi-view representation learning to increase the ability to counter popularity bias. Second, we propose to integrate the different views into a compact representation for the downstream recommendation tasks, which can further improve the robustness of the model.
Generally, when information from different views complements each other, multi-view representation learning approaches can be expected to improve downstream performance [28]. We therefore argue that simply maximizing the mutual information, as in conventional graph contrastive learning, may push the representations of different views to capture information irrelevant to the downstream task. Inspired by recent advances of the Information Bottleneck (IB) [32], which encourages the representation to capture the minimum sufficient information for the downstream task, we utilize the IB principle to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. By doing so, the learnable graph augmenters can learn to remove noisy interactions in the original graph as much as possible, since these interactions are of no help for the downstream recommendation. Also, the IB principle helps representations of different views capture collaborative information of different semantics that complement each other.

The contributions of this paper are summarized as follows. (1) We propose CGI to construct optimized graph structures by dropping nodes and edges adaptively for the multi-view representation learning of users and items, which provides rationales for alleviating the popularity bias. (2) To efficiently drop information irrelevant to the downstream recommendation, we innovatively integrate the information bottleneck into the multi-view contrastive learning process for recommendation and prove that it can better mitigate interaction noise. (3) Experimental results show that our method outperforms state-of-the-art methods on three benchmark datasets from different domains.

2 Related Work

Graph-based Recommendation. Early works exploiting the user-item bipartite graph for recommendation, such as ItemRank [3], usually followed the label propagation mechanism to propagate users' preferences over the graph, i.e., encouraging connected nodes to have similar labels. In recent years, Graph Convolution Networks (GCNs) have made great progress in representation learning tasks including node classification and link prediction [5, 12, 35]. Motivated by the strength of GCNs, several works [24, 8, 37, 30] have adapted GCNs to the user-item bipartite graph to learn more robust latent representations for users and items in recommender systems.

Contrastive Learning. Contrastive Learning (CL) [22, 9] was first proposed to train CNNs for image representation learning. Graph Contrastive Learning (GCL) applies the idea of CL to GNNs. DGI [27] and InfoGraph [19] learn node representations according to the mutual information between nodes and the whole graph. Peng et al. [15] developed an unsupervised learning model trained by maximizing the mutual information of nodes between the input and output of a graph neural encoder. Hu et al. [10] extend the idea to build contrastive pairs between nodes and subgraphs. In addition, GCC [16] designs the pre-training task as subgraph instance discrimination in and across networks and leverages CL to empower GNNs. A very recent work, SGL [31], supplements the classical supervised task of recommendation with an auxiliary graph CL task, which generates multiple views of a node and maximizes the agreement between different views.
However, it differs from our work in that: (1) SGL [31] generates contrastive pairs by randomly dropping edges/nodes, while our work adopts a learnable augmenter to optimize the generated views; (2) SGL [31] utilizes conventional CL as an auxiliary task by maximizing the agreement of the augmentation views, while we propose to encourage the differences between the augmentation views and the original graph.

Learning by Information Bottleneck. The Information Bottleneck (IB) [23] is an approach based on information theory, which states that if the obtained representation discards information from the input that is not useful for a given task, it will increase robustness for the downstream tasks. Besides, the information bottleneck principle has been used in multi-view representation learning [34, 29, 2]. Formally, given the original data $X$ with label $Y$, IB aims to obtain a compact and effective representation $Z$ of $X$, and its objective is:
$$\max_Z \; I(Y; Z) - \beta I(X; Z), \quad (1)$$
where $\beta$ is the coefficient balancing the mutual information terms $I(Y; Z)$ and $I(X; Z)$. Recently, some works have proposed to integrate the IB principle into the graph learning process. You et al. [39] propose a variational graph auto-encoder to generate contrastive views, and the downstream contrastive learning utilizes IB on graph representations as the unsupervised loss. Both Yu et al. [40] and Yu et al. [42] aim to directly reveal the vital substructure at the subgraph level, among which [1] learns a node assignment matrix to extract the subgraph and implements the IB of two graphs by estimating the KL-divergence from the graph latent representation with a statistic network (Donsker-Varadhan representation of the KL-divergence), and Yu et al. [42] employ noise injection to manipulate the graph and customize the Gaussian prior for each input graph and the injected noise, so as to implement the IB of two graphs with a tractable variational upper bound. Our CGI differs from them, since we do not directly aim to find an optimal graph structure; instead, we try to learn graph structures complementing the original one. Then, by integrating different views into a compact representation, we obtain the optimal node representation for the downstream task. Sun et al. [20] learn to mask node features and generate a new structure with the masked features. Afterward, [20] adopt a GNN to learn the distribution of the graph representation and utilize the KL-divergence between the learned distribution and the prior distribution to implement the IB. All these methods aim to find a better structure or representation to replace the original graph for the downstream task, while our CGI follows a multi-view representation learning scheme. IB is utilized to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. Besides the noise-invariance property, IB helps representations of different views capture collaborative information of different semantics that complement each other. AD-GCL [21] shares some ideas with our CGI, but there are fundamental differences. Specifically, AD-GCL focuses on training self-supervised GNNs for graph-level tasks, whereas CGI aims to mitigate the popularity bias and interaction noise of node-level collaborative filtering (CF). In addition, AD-GCL adopts an adversarial strategy aiming to maximize the agreement of the final representations of different views. Instead, our CGI minimizes the mutual information of different views to capture collaborative information of different semantics. To the best of our knowledge, this is the first study on leveraging the IB principle to enhance graph-based recommendations.
1. What is the focus and contribution of the paper on graph neural networks? 2. What are the strengths of the proposed approach, particularly in terms of adaptive dropping of edges and nodes, and contrastive learning? 3. What are the weaknesses of the paper, especially regarding theoretical links and limitations in considering real-world recommendation systems? 4. Do you have any concerns about the choice of GNN variants or the simplification of the problem in the dataset? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes a graph neural network model based on LightGCN that uses learned parameters to adaptively drop nodes and edges to obtain multi-view representations, combined with contrastive learning via an information bottleneck. Experiments on public datasets show improvements compared to existing models. Ablation studies show the effectiveness of different components in the model.

Strengths And Weaknesses
Strengths. The paper explains the motivation behind different model components, e.g., adaptive dropping of edges and nodes to reduce popularity bias and contrastive learning via information bottleneck to reduce noise. The paper conducts thorough experiments using 3 public datasets and compares with multiple models. There are also ablation studies to verify different hypotheses in the model. The paper also provides detailed hyperparameters and a time complexity analysis. Code was also provided for reproducing the results.
Weaknesses. While the paper provides empirical results showing that the model can reduce popularity bias and provide robustness to noise, no clear theoretical link is provided.

Questions
The main GNN component is based on LightGCN; have you tried other GNN variants? Do they work well? From the details of the dataset, it seems that the paper only considers a binary case for Movielens and Douban (1 vs 0), and this seems to simplify the problem. For real recommendation systems, there is a difference between recommending an item that a user would rate 5 vs. 4 (assuming 5 is the highest rating). It would be worth trying to fully utilize the rating information in these datasets. If the proposed model aims to address the popularity bias issue, why are items with fewer interactions removed (Yelp with <10, Movielens with <3, Douban with <5)? These are real data and should be considered in the model training and evaluation unless they are corrupt or spammy data points.

Limitations
The model assumes that the data is relatively clean (without much spam or misleading content/items). Real recommendation systems would need to consider filtering out spammy, misleading, and low-quality content.
NIPS
Title
Contrastive Graph Structure Learning via Information Bottleneck for Recommendation

Abstract
Graph convolution networks (GCNs) for recommendation have emerged as an important research topic due to their ability to exploit higher-order neighbors. Despite their success, most of them suffer from the popularity bias brought by a small number of active users and popular items. Also, a real-world user-item bipartite graph contains many noisy interactions, which may hamper the sensitive GCNs. Graph contrastive learning shows promising performance for solving the above challenges in recommender systems. Most existing works typically perform graph augmentation to create multiple views of the original graph by randomly dropping edges/nodes or relying on predefined rules, and these augmented views always serve as an auxiliary task by maximizing their correspondence. However, we argue that the graph structures generated by these vanilla approaches may be suboptimal, and that maximizing their correspondence will force the representation to capture information irrelevant to the recommendation task. Here, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which adaptively learns whether to drop an edge or node to obtain optimized graph structures in an end-to-end manner. Moreover, we innovatively introduce the Information Bottleneck into the contrastive learning process to avoid capturing irrelevant information among different views and to help enrich the final representation for recommendation. Extensive experiments on public datasets show that our model significantly outperforms strong baselines.

1 Introduction
Recommender systems have been widely deployed to alleviate information overload in diverse scenarios including e-commerce, online news, and multimedia content, which requires high-quality user and item representations learned from historical interactions [7, 14, 43]. Recently, thanks to their powerful capability in modeling graph-structured data, Graph Convolution Networks (GCNs) provide an efficient way to integrate multi-hop neighbors into node representation learning and show prominent performance in recommendation [37, 30, 8]. Although encouraging performance has been achieved, we argue that most GCN-based recommender models suffer from the following two limitations, whose impacts on the user's exhibited preference are illustrated in Fig. 1. i) Popularity Bias. Items inherently have different customer sizes, and this imbalance can potentially lead to popularity bias [45]. In most recommender systems, the customer size of items usually follows a long-tail distribution, which means a few items have massive numbers of customers while the majority have few. Similarly, most users have few interactions. This skewed data distribution will easily bias GCN-based models towards the popular users and items during multi-hop convolution, which may hamper representation learning. ii) Interaction Noises. User-item interactions usually contain noise, especially in scenarios with only implicit feedback (e.g., clicks and purchases).
∗Equal contributions from both authors. This work was done while Chunyu Wei was an intern at Alibaba. The code is available at https://github.com/weicy15/CGI.
More specifically, these noisy edges in the bipartite graph are not necessarily aligned with user preferences [18], since it is common for a user to click something by mistake or find something boring after purchasing it. GCN-based models are known to be vulnerable to the quality of the input graphs [44], which means aggregating misleading neighborhood information is likely to lead to sub-optimal performance.

Recent advances in graph contrastive learning [27, 38] have identified an effective training scheme for mitigating popularity bias and increasing robustness to noise on graph-based tasks, inspiring many studies [31, 41, 33] to introduce this training scheme to enhance representation learning for recommendation. Nevertheless, existing studies have two limitations. First, most methods perform data augmentation by randomly dropping edges/nodes to change the graph structure [31], shuffling the embeddings to corrupt the node representations [41], or relying on predefined rules [6]. However, within unsupervised settings, structures created from these vanilla approaches may be suboptimal for recommendation tasks, and there is no persuasive rationale for why randomly dropped edges/nodes should alleviate the popularity bias and interaction noise. Like the obtained representation No.1 in Fig. 1, structures created from these vanilla approaches may deviate from the optimal area. Second, most methods generate multiple views only to serve as an auxiliary task by maximizing the agreement of node representations among these views, which may force the user or item representation in different views to capture information irrelevant to the recommendation task. For example, the obtained representation No.1 in Fig. 1 contains much information irrelevant to the real preference. We therefore believe that a good augmentation (e.g., No.2 in Fig. 1) should cover as much of the optimal area as possible while being as small as possible to reduce useless information.

To address the aforementioned limitations, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which contains two key components: learnable graph augmentation and information bottleneck contrastive learning. First, we propose learnable graph augmentation to learn whether to drop an edge or node, transforming the original bipartite graph into correlated views that are jointly optimized with the downstream recommendation in an end-to-end fashion. As a result, these generated views can intentionally reduce the influence of popular nodes while preserving information of the isolated nodes, and thus help to mitigate the popularity bias. The intuition behind this is that random dropout indiscriminately drops nodes or edges regardless of node degree, whereas, through the message-passing mechanism, GCNs can easily reconstruct the missing information of popular users or items but struggle to reconstruct isolated nodes with few connections, and may therefore overemphasize high-degree nodes. These generated views with debiased information are all fed into the GCN-based recommender for multi-view representation learning, strengthening the model against popularity bias. Second, we propose to integrate the different views into a compact representation for the downstream recommendation task, which can further improve the robustness of the model.
Generally, when information from different views complements each other, multi-view representation learning approaches can be expected to improve downstream performance [28]. We therefore argue that simply maximizing the mutual information, as in conventional graph contrastive learning, may push the representations of different views to capture information irrelevant to the downstream task. Inspired by recent advances in the Information Bottleneck (IB) [32], which encourages the representation to capture the minimum sufficient information for the downstream task, we utilize the IB principle to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. By doing so, the learnable graph augmenters learn to remove noisy interactions in the original graph as much as possible, since these interactions are of no help for the downstream recommendation. The IB principle also helps the representations of different views capture collaborative information of different semantics that complement each other.

The contributions of this paper are summarized as follows.
(1) We propose CGI to construct optimized graph structures by dropping nodes and edges adaptively for the multi-view representation learning of users and items, which provides rationales for alleviating the popularity bias.
(2) To efficiently drop information irrelevant to the downstream recommendation, we innovatively integrate the information bottleneck into the multi-view contrastive learning process for recommendation and prove that it can better mitigate interaction noise.
(3) Experimental results show that our method outperforms the state-of-the-art methods on three benchmark datasets from different domains.

2 Related Work
Graph-based Recommendation
Early works exploiting the user-item bipartite graph for recommendation, such as ItemRank [3], usually followed the label propagation mechanism to propagate users' preferences over the graph, i.e., encouraging connected nodes to have similar labels. In recent years, Graph Convolution Networks (GCNs) have made great progress in representation learning tasks including node classification and link prediction [5, 12, 35]. Motivated by the strength of GCNs, several works [24, 8, 37, 30] have adapted GCNs to the user-item bipartite graph to learn more robust latent representations for users and items in recommender systems.

Contrastive Learning
Contrastive Learning (CL) [22, 9] was first proposed to train CNNs for image representation learning. Graph Contrastive Learning (GCL) applies the idea of CL to GNNs. DGI [27] and InfoGraph [19] learn node representations according to the mutual information between nodes and the whole graph. Peng et al. [15] developed an unsupervised learning model trained by maximizing the mutual information of nodes between the input and output of a graph neural encoder. Hu et al. [10] extend the idea to build contrastive pairs between nodes and subgraphs. In addition, GCC [16] designs the pre-training task as subgraph instance discrimination in and across networks and leverages CL to empower GNNs. A very recent work, SGL [31], supplements the classical supervised task of recommendation with an auxiliary graph CL task, which generates multiple views of a node and maximizes the agreement between different views.
However, it differs from our work in two respects: (1) SGL [31] generates contrastive pairs by randomly dropping edges/nodes, while our work adopts a learnable augmenter to optimize the generated views; (2) SGL [31] utilizes conventional CL as an auxiliary task by maximizing the agreement of augmentation views, while we propose to encourage the differences between the augmentation views and the original graph.

Learning by Information-Bottleneck
The Information Bottleneck (IB) [23] is an approach based on information theory, which states that if the obtained representation discards information from the input that is not useful for a given task, robustness on downstream tasks will increase. The information bottleneck principle has also been used in multi-view representation learning [34, 29, 2]. Formally, given the original data X with label Y, IB aims to obtain a compact and effective representation Z of X. The objective of the IB principle is

$\max_{\mathbf{Z}} \; I(\mathbf{Y};\mathbf{Z}) - \beta I(\mathbf{X};\mathbf{Z}),$  (1)

where $\beta$ is the coefficient balancing the mutual information terms $I(\mathbf{Y};\mathbf{Z})$ and $I(\mathbf{X};\mathbf{Z})$.

Recently, some works have integrated the IB principle into graph learning. You et al. [39] propose a variational graph auto-encoder to generate contrastive views, and the downstream contrastive learning utilizes IB, performed on graph representations, as the unsupervised loss. Both Yu et al. [40] and Yu et al. [42] aim to directly reveal the vital substructure at the subgraph level; among them, [40] learns a node assignment matrix to extract the subgraph and implements the IB between two graphs by estimating the KL-divergence of graph latent representations with a statistics network (the Donsker-Varadhan representation of the KL-divergence), while Yu et al. [42] employ noise injection to manipulate the graph and customize the Gaussian prior for each input graph and the injected noise, so as to implement the IB between two graphs with a tractable variational upper bound. Our CGI differs from them: we do not directly aim to find an optimal graph structure; instead, we try to learn graph structures that complement the original one. Then, by integrating different views into a compact representation, we obtain the optimal node representation for the downstream task. Sun et al. [20] learn to mask node features and generate a new structure from the masked features; they then adopt a GNN to learn the distribution of the graph representation and use the KL-divergence between the learned distribution and a prior distribution to implement the IB. All these methods aim to find a better structure or representation to replace the original graph for the downstream task, while our CGI follows a multi-view representation learning scheme: IB is utilized to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. Besides the noise-invariance property, IB helps the representations of different views capture collaborative information of different semantics that complement each other. AD-GCL [21] shares some ideas with our CGI, but there are fundamental differences. Specifically, AD-GCL focuses on training self-supervised GNNs for graph-level tasks, whereas CGI aims to mitigate the popularity bias and interaction noise of node-level collaborative filtering (CF). In addition, AD-GCL adopts an adversarial strategy aiming to maximize the agreement of the final representations of different views.
Instead, our CGI minimizes the mutual information of different views to capture collaborative information of different semantics. To the best of our knowledge, this is the first study to leverage the IB principle to enhance graph-based recommendation.

3 Preliminaries
Problem Definition. Let $\mathcal{U} = \{u_1, u_2, \ldots, u_m\}$ denote the set of users and $\mathcal{I} = \{i_1, i_2, \ldots, i_n\}$ denote the set of items. We typically use a binary matrix $\mathbf{R} \in \mathbb{R}^{m \times n}$ to store user-item interactions (e.g., purchases and clicks), where $r_{ui} = 1$ indicates that user $u$ consumed item $i$, while $r_{ui} = 0$ means that item $i$ is unexposed to user $u$ or that user $u$ is not interested in item $i$. Following most existing works [30, 8], we represent the interaction data as a user-item bipartite graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$, where the node set $\mathcal{V} = \mathcal{U} \cup \mathcal{I}$ and the edge set $\mathcal{E} = \{e_{ui} \mid r_{ui} = 1, u \in \mathcal{U}, i \in \mathcal{I}\}$. The adjacency matrix $\mathbf{A}_{\mathcal{G}}$ can be formulated as

$\mathbf{A}_{\mathcal{G}} = \begin{bmatrix} \mathbf{0} & \mathbf{R} \\ \mathbf{R}^{T} & \mathbf{0} \end{bmatrix}.$  (2)

With respect to the adjacency matrix $\mathbf{A}_{\mathcal{G}}$, the degree matrix $\mathbf{D}_{\mathcal{G}} \in \mathbb{N}^{(m+n) \times (m+n)}$ is a diagonal matrix in which each entry $\mathbf{D}_{\mathcal{G}}[i, i]$ is the number of nonzero entries in the $i$-th row of $\mathbf{A}_{\mathcal{G}}$.

GCN Paradigm. The core of graph convolution on graph $\mathcal{G}$ is to update the ego node by aggregating the representations of its neighbor nodes, which can be formulated as

$\mathbf{E}^{(l)} = \mathrm{GCN}(\mathbf{E}^{(l-1)}, \mathcal{G}),$  (3)

where $\mathbf{E}^{(l-1)}$ is the current node representations and $\mathbf{E}^{(l)}$ is the updated representations after the graph convolution layer; $\mathbf{E}^{(0)}$ is the initial input, usually the ID embeddings (trainable parameters). At the vector level, Eq. 3 can be interpreted as

$\mathbf{e}_u^{(l)} = f_{\mathrm{combine}}^{(l)}\big(\mathbf{e}_u^{(l-1)}, f_{\mathrm{aggregate}}^{(l)}(\{\mathbf{e}_i^{(l)} \mid i \in \mathcal{N}_u\})\big),$  (4)
$\mathbf{e}_i^{(l)} = f_{\mathrm{combine}}^{(l)}\big(\mathbf{e}_i^{(l-1)}, f_{\mathrm{aggregate}}^{(l)}(\{\mathbf{e}_u^{(l)} \mid u \in \mathcal{N}_i\})\big),$  (5)

where $\mathcal{N}_u$ and $\mathcal{N}_i$ are the neighbor sets of user $u$ and item $i$, respectively. Many works design different $f_{\mathrm{combine}}$ and $f_{\mathrm{aggregate}}$ [5, 26, 35]. Usually, a readout function generates the final representations for the recommendation task:

$\mathbf{e} = f_{\mathrm{readout}}(\{\mathbf{e}^{(l)} \mid l = 0, 1, \ldots, L\}).$  (6)

For example, $f_{\mathrm{readout}}$ can be concatenation [30], weighted sum [8], or retaining the last output [24].

LightGCN Brief. In this paper, we implement our CGI on the simple but effective GCN-based recommendation model LightGCN. It adopts weighted-sum aggregators and abandons feature transformation and nonlinear activation; its matrix form is

$\mathbf{E}^{(l)} = (\mathbf{D}_{\mathcal{G}}^{-\frac{1}{2}} \mathbf{A}_{\mathcal{G}} \mathbf{D}_{\mathcal{G}}^{-\frac{1}{2}}) \mathbf{E}^{(l-1)}, \quad l \in \mathbb{N}^{+},$  (7)

where $\mathbf{E}^{(l-1)} = [\mathbf{E}_u^{(l-1)}, \mathbf{E}_i^{(l-1)}]$ is the output of the previous LightGCN layer or the initial $\mathbf{E}^{(0)}$. Finally, LightGCN implements $f_{\mathrm{readout}}$ as a weighted sum in which the weight of each layer is set to $\frac{1}{L+1}$, following the original work. After obtaining the representations of users and items, the inner product $\hat{r}_{ui} = \mathbf{e}_u^{T} \mathbf{e}_i$ is used to predict the preference score, as commonly adopted in most recommender systems. LightGCN employs the Bayesian Personalized Ranking (BPR) loss [17] to optimize the model parameters: $\mathcal{L}_{rec} = \sum_{(u,i,j) \in \mathcal{O}} -\ln \sigma(\hat{r}_{ui} - \hat{r}_{uj})$, where $\mathcal{O} = \{(u,i,j) \mid (u,i) \in \mathcal{R}^{+}, (u,j) \in \mathcal{R}^{-}\}$ is the pairwise training data, in which $\mathcal{R}^{+}$ denotes the observed interactions and $\mathcal{R}^{-}$ denotes the unobserved interactions. In this work, we also choose it as the objective function for the recommendation task.
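To make the preliminaries above concrete, the following is a minimal PyTorch sketch of the LightGCN propagation (Eq. 2 and 7), the uniform 1/(L+1) readout, and the BPR objective. It is an illustrative sketch under our own naming (build_norm_adj, lightgcn, bpr_loss), not the released CGI implementation.

```python
# Minimal LightGCN backbone and BPR loss; an illustrative sketch, not the official CGI code.
import torch
import torch.nn.functional as F

def build_norm_adj(R):
    """Bipartite adjacency (Eq. 2) with symmetric normalization D^{-1/2} A D^{-1/2} (Eq. 7)."""
    m, n = R.shape
    A = torch.zeros(m + n, m + n)
    A[:m, m:] = R
    A[m:, :m] = R.t()
    deg = A.sum(dim=1).clamp(min=1.0)          # guard against isolated nodes
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

def lightgcn(E0, A_norm, num_layers=3):
    """Propagate ID embeddings without transformations and average all layer outputs (1/(L+1) readout)."""
    outputs, E = [E0], E0
    for _ in range(num_layers):
        E = A_norm @ E                          # Eq. 7: no feature transform, no nonlinearity
        outputs.append(E)
    return torch.stack(outputs, dim=0).mean(dim=0)

def bpr_loss(e_user, e_pos, e_neg):
    """BPR: an observed item should score higher than an unobserved one for the same user."""
    pos = (e_user * e_pos).sum(dim=-1)          # r_hat_ui = e_u^T e_i
    neg = (e_user * e_neg).sum(dim=-1)
    return -F.logsigmoid(pos - neg).mean()

# Toy usage: 4 users, 6 items, 16-dimensional embeddings.
R = (torch.rand(4, 6) < 0.3).float()
E = lightgcn(torch.randn(10, 16) * 0.01, build_norm_adj(R))
E_user, E_item = E[:4], E[4:]
```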
4 Methodology
The framework of CGI is illustrated in Fig. 2, and we detail the inference procedure in the Appendix.

4.1 Learnable Multi-View Augmentation
Most GCN-based recommenders, such as LightGCN [8], fully rely on the adjacency matrix $\mathbf{A}_{\mathcal{G}}$ to refine the representations of users and items in Eq. 7. However, $\mathbf{A}_{\mathcal{G}}$ may contain much biased and noisy information, as discussed in Sec. 1, which continues to propagate misleading signals as LightGCN goes deeper. On the other hand, the vanilla random dropout used in most contrastive learning for recommendation cannot create powerful views that alleviate popularity bias and interaction noise. We hence utilize parameterized networks to generate layer-wise optimized augmentation views. Specifically, we assign different graph convolution layers different learned subgraphs, coupled with the downstream recommendation, and thus obtain multi-view user and item representations. We elaborate on the two types of learnable augmentation as follows.

Node-Dropping View. As illustrated in Sect. 1, popular users or items in the graph may skew the data distribution and thus hinder the GCN-based recommender. We therefore perform learnable node dropping at each layer to mask the influential nodes and create the Node-Dropping view, which can be formulated as

$\mathcal{G}_{ND}^{(l)} = \{\{v_i \odot \rho_i^{(l)} \mid v_i \in \mathcal{V}\}, \mathcal{E}\},$  (8)

where $\rho_i^{(l)} \in \{0, 1\}$ is drawn from a Bernoulli distribution parameterized by $\omega_i^{(l)}$, i.e., $\rho_i^{(l)} \sim \mathrm{Bern}(\omega_i^{(l)})$, and denotes whether to keep node $v_i$. Simply removing the selected node alongside all its connections would cause a dramatic change of the bipartite graph structure, affecting the information aggregation and making training unstable. Thus, instead of removing the selected node, we replace the selected node $v$ with its local subgraph's representation to obscure its original representation while retaining its corresponding edges. For node $v$, we perform a random walk on the bipartite graph $\mathcal{G}$ with walk length $k$ and take the mean pooling of the sampled nodes as $v$'s local subgraph representation.

Edge-Dropping View. The goal of the Edge-Dropping view is to generate a subgraph that filters out noisy edges and intentionally decreases the influence of popular nodes for the GCN layers. Similarly to the Node-Dropping view, we create the Edge-Dropping view by learnable edge dropping:

$\mathcal{G}_{ED}^{(l)} = \{\mathcal{V}, \{e_{ij} \odot \rho_{ij}^{(l)} \mid e_{ij} \in \mathcal{E}\}\},$  (9)

where $\rho_{ij}^{(l)} \in \{0, 1\}$ also follows $\rho_{ij}^{(l)} \sim \mathrm{Bern}(\omega_{ij}^{(l)})$ and denotes whether edge $e_{ij}$ is present. Following [26], we adopt multi-layer perceptrons (MLPs) to parameterize $\omega_i^{(l)}$ and $\omega_{ij}^{(l)}$, which control whether to mask node $v_i$ and edge $e_{ij}$, respectively:

$\omega_i^{(l)} = \mathrm{MLP}(\mathbf{e}_i^{(l)}); \quad \omega_{ij}^{(l)} = \mathrm{MLP}([\mathbf{e}_i^{(l)}; \mathbf{e}_j^{(l)}]).$  (10)

To efficiently optimize the multi-view structure learning in an end-to-end manner, we adopt the reparameterization trick [11] and relax the binary entries $\rho$ from being drawn from a Bernoulli distribution to a deterministic function of the parameter $\omega$ and an independent random variable $\epsilon$:

$\rho = \sigma\big((\log \epsilon - \log(1 - \epsilon) + \omega)/\tau\big),$  (11)

where $\epsilon \sim \mathrm{Uniform}(0, 1)$, $\tau \in \mathbb{R}^{+}$ is the temperature, and $\sigma(\cdot)$ is the sigmoid function. With $\tau > 0$, the function is smooth with a well-defined gradient $\frac{\partial \rho}{\partial \omega}$, enabling efficient optimization of the learnable Node-Dropping and Edge-Dropping views during training. At inference time, we deterministically drop a node or edge whose keep probability is less than 0.5; a sketch of this gating mechanism is given below.
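As a rough illustration of the learnable augmentation above, the sketch below implements the edge-dropping gate of Eq. 9-11: an MLP scores each edge (Eq. 10), the Bernoulli mask is relaxed with the reparameterization of Eq. 11 during training, and the keep probability is thresholded at 0.5 for inference. The MLP width, default temperature, and class name are our own assumptions; the node-dropping gate is analogous, with $\omega_i = \mathrm{MLP}(\mathbf{e}_i)$.

```python
# Sketch of the learnable edge-dropping gate (Eq. 9-11); sizes and defaults are illustrative assumptions.
import torch
import torch.nn as nn

class EdgeDropGate(nn.Module):
    def __init__(self, dim, tau=0.5):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.tau = tau

    def forward(self, e_i, e_j, training=True):
        # omega_ij = MLP([e_i ; e_j])  (Eq. 10)
        omega = self.mlp(torch.cat([e_i, e_j], dim=-1)).squeeze(-1)
        if training:
            # Relaxed Bernoulli mask: rho = sigmoid((log eps - log(1 - eps) + omega) / tau)  (Eq. 11)
            eps = torch.rand_like(omega).clamp(1e-6, 1 - 1e-6)
            return torch.sigmoid((torch.log(eps) - torch.log(1 - eps) + omega) / self.tau)
        # Inference: keep an edge only if its keep probability is at least 0.5
        return (torch.sigmoid(omega) >= 0.5).float()
```

In such a sketch, the resulting mask would scale the corresponding entries of the (normalized) adjacency used by the l-th GCN layer, so gradients from the recommendation and IB objectives can flow back into the gate.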
Afterwards, we perform GCNs to obtain the representations of users and items on these views:

$\mathbf{E}_{ND}^{(l)} = \mathrm{GCN}(\mathbf{E}_{ND}^{(l-1)}, \mathcal{G}_{ND}^{(l)}), \quad \mathbf{E}_{ED}^{(l)} = \mathrm{GCN}(\mathbf{E}_{ED}^{(l-1)}, \mathcal{G}_{ED}^{(l)}),$  (12)

where the initial $\mathbf{E}_{ND}^{(0)} = \mathbf{E}_{ED}^{(0)} = \mathbf{E}^{(0)}$. After stacking $L$ LightGCN layers, we again adopt the weighted sum to construct the final representations $\mathbf{E}_{ND}$ and $\mathbf{E}_{ED}$, respectively. For simplicity, we omit the augmentation types ND and ED in the symbols below and use $\tilde{\mathbf{E}}$ to denote the representations of these augmentation views.

4.2 Information Bottleneck Contrastive Learning
Although we couple the learnable augmentation process and the recommendation process together, we find that relying solely on the recommendation objective cannot properly guide the dropout process to create optimal augmentation views. Thus, we adopt the Information Bottleneck principle to retain the minimum sufficient information in each view for the downstream recommendation. Specifically, unlike conventional contrastive learning, we instead encourage divergence between the representations of the augmentation view and the original graph while maximizing the information relevant to the recommendation task. By doing so, we can obtain a comprehensive multi-view representation and efficiently drop noisy collaborative information for the recommendation. Accordingly, the objective in Eq. 1 is induced as

$\min_{(\mathbf{E}, \tilde{\mathbf{E}})} \tilde{\mathcal{L}}_{rec} + I(\mathbf{E}; \tilde{\mathbf{E}}),$  (13)

where $\tilde{\mathcal{L}}_{rec}$ is the BPR loss of the representation from the augmentation view and $I(\mathbf{E}; \tilde{\mathbf{E}})$ is the mutual information between the representations of the two corresponding views. According to [25, 19], minimizing the InfoNCE loss [4] is equivalent to maximizing a lower bound on the corresponding mutual information. We therefore adopt the negative InfoNCE loss to estimate the mutual information between the representations of the augmentation view and the original graph, which consists of mutual information from both the user side and the item side. Formally, for the user-side mutual information, we treat the representations of the same user in the augmentation view and the original graph as positive pairs (i.e., $\{(\mathbf{e}_i, \tilde{\mathbf{e}}_i) \mid v_i \in \mathcal{U}\}$) and the representations of two different users in the augmentation view and the original graph as negative pairs (i.e., $\{(\mathbf{e}_i, \tilde{\mathbf{e}}_j) \mid v_i, v_j \in \mathcal{U}, i \neq j\}$):

$I(\mathbf{E}_u; \tilde{\mathbf{E}}_u) = \sum_{v_i \in \mathcal{U}} \log \frac{\exp(s(\mathbf{e}_i, \tilde{\mathbf{e}}_i)/\tau')}{\sum_{v_j \in \mathcal{U}} \exp(s(\mathbf{e}_i, \tilde{\mathbf{e}}_j)/\tau')},$  (14)

where $s(\cdot)$ measures the similarity between two vectors and is set to the cosine similarity, and $\tau'$ is a temperature hyper-parameter similar to the one in Eq. 11. Analogously, we obtain the item-side mutual information $I(\mathbf{E}_i; \tilde{\mathbf{E}}_i)$, and the overall mutual information combines the two sides: $I(\mathbf{E}; \tilde{\mathbf{E}}) = I(\mathbf{E}_u; \tilde{\mathbf{E}}_u) + I(\mathbf{E}_i; \tilde{\mathbf{E}}_i)$.

4.3 Optimization
To obtain comprehensive multi-view representations, we utilize two parameterized networks to learn to create the Node-Dropping view and the Edge-Dropping view simultaneously. To integrally exploit both views for better recommendation, we jointly optimize the recommendation tasks of these views and the self-supervised IB contrastive learning:

$\mathcal{L} = \mathcal{L}_{rec} + \mathcal{L}_{rec}^{ND} + \mathcal{L}_{rec}^{ED} + \lambda\big(I(\mathbf{E}; \mathbf{E}_{ND}) + I(\mathbf{E}; \mathbf{E}_{ED})\big) + \beta \|\Theta\|_2^2,$  (15)

where $\mathcal{L}_{rec}^{ND}$ and $\mathcal{L}_{rec}^{ED}$ are the recommendation objectives of the Node-Dropping view and the Edge-Dropping view, respectively. The last term is an L2 regularization. $\lambda$ and $\beta$ are hyper-parameters controlling the strength of the IB contrastive learning task and the L2 regularization, respectively.
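The sketch below illustrates the InfoNCE-based mutual-information estimate (Eq. 14) and the joint objective (Eq. 15). The helper names and the way user- and item-side terms are combined are our own assumptions, not the released code; note that Eq. 14 is the negative InfoNCE loss, so adding it with a positive weight $\lambda$ in Eq. 15 minimizes the estimated mutual information.

```python
# Sketch of the user/item-side InfoNCE MI estimate (Eq. 14) and the joint loss (Eq. 15); illustrative only.
import torch
import torch.nn.functional as F

def mi_estimate(E, E_aug, temperature=0.2):
    """Negative InfoNCE loss: the same node across the two views is the positive pair (cosine similarity)."""
    z, z_aug = F.normalize(E, dim=-1), F.normalize(E_aug, dim=-1)
    logits = (z @ z_aug.t()) / temperature
    labels = torch.arange(z.size(0), device=z.device)
    return -F.cross_entropy(logits, labels)      # mean over nodes rather than the sum in Eq. 14

def cgi_objective(L_rec, L_rec_nd, L_rec_ed, E_u, E_i, E_nd, E_ed, lam=0.02, l2_term=0.0, beta=0.01):
    """Eq. 15: recommendation losses of all views plus lambda times the (minimized) MI estimates.
    E_nd and E_ed are (user, item) pairs of augmented-view embedding matrices."""
    mi_nd = mi_estimate(E_u, E_nd[0]) + mi_estimate(E_i, E_nd[1])   # user side + item side
    mi_ed = mi_estimate(E_u, E_ed[0]) + mi_estimate(E_i, E_ed[1])
    return L_rec + L_rec_nd + L_rec_ed + lam * (mi_nd + mi_ed) + beta * l2_term
```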
Proposition 1. Formally, we denote the learned augmentation view as $\tilde{\mathcal{G}}$, the noisy graph structure as $\mathcal{G}'$, and the downstream recommendation information as $\mathbf{Y}_{Rec}$. Suppose $\mathcal{G}'$ is irrelevant to $\mathbf{Y}_{Rec}$; then the mutual information $I(\mathcal{G}'; \tilde{\mathcal{G}})$ is upper bounded by $I(\mathcal{G}; \tilde{\mathcal{G}}) - I(\mathbf{Y}_{Rec}; \tilde{\mathcal{G}})$:

$I(\mathcal{G}'; \tilde{\mathcal{G}}) \le I(\mathcal{G}; \tilde{\mathcal{G}}) - I(\mathbf{Y}_{Rec}; \tilde{\mathcal{G}}).$  (16)

Proof. Following the Markov chain assumption in [1], we suppose $\mathcal{G}$ is determined by $\mathbf{Y}_{Rec}$ and $\mathcal{G}'$, and we can define the Markov chain $(\mathbf{Y}_{Rec}, \mathcal{G}') \to \mathcal{G} \to \tilde{\mathcal{G}}$. According to the Data Processing Inequality, we have

$I(\mathcal{G}; \tilde{\mathcal{G}}) \ge I((\mathbf{Y}_{Rec}, \mathcal{G}'); \tilde{\mathcal{G}}) = I(\mathcal{G}'; \tilde{\mathcal{G}}) + I(\mathbf{Y}_{Rec}; \tilde{\mathcal{G}} \mid \mathcal{G}') = I(\mathcal{G}'; \tilde{\mathcal{G}}) + H(\mathbf{Y}_{Rec} \mid \mathcal{G}') - H(\mathbf{Y}_{Rec} \mid \mathcal{G}', \tilde{\mathcal{G}}).$  (17)

Since $\mathcal{G}'$ and $\mathbf{Y}_{Rec}$ are independent, we have $H(\mathbf{Y}_{Rec} \mid \mathcal{G}') = H(\mathbf{Y}_{Rec})$. Also, it is straightforward that $H(\mathbf{Y}_{Rec} \mid \mathcal{G}', \tilde{\mathcal{G}}) \le H(\mathbf{Y}_{Rec} \mid \tilde{\mathcal{G}})$. Thus we can simplify Eq. 17 as

$I(\mathcal{G}; \tilde{\mathcal{G}}) \ge I(\mathcal{G}'; \tilde{\mathcal{G}}) + H(\mathbf{Y}_{Rec}) - H(\mathbf{Y}_{Rec} \mid \tilde{\mathcal{G}}) = I(\mathcal{G}'; \tilde{\mathcal{G}}) + I(\mathbf{Y}_{Rec}; \tilde{\mathcal{G}}).$  (18)

Thus we obtain $I(\mathcal{G}'; \tilde{\mathcal{G}}) \le I(\mathcal{G}; \tilde{\mathcal{G}}) - I(\mathbf{Y}_{Rec}; \tilde{\mathcal{G}})$, where $I(\mathbf{Y}_{Rec}; \tilde{\mathcal{G}})$ is inversely related to $\tilde{\mathcal{L}}_{rec}$ in Eq. 13.

Eq. 16 shows that optimizing the IB contrastive objective in Eq. 13 is equivalent to minimizing the mutual information between the learned augmentation view and the noisy structure. Specifically, it provides a theoretical guarantee that IB contrastive learning leads to the noise-invariance property by compressing the information in both augmentation views. Meanwhile, the IB contrastive objective also requires the augmentation view to remain predictive for the recommendation task, which intentionally reduces the influence of popular nodes while preserving the information of isolated nodes, and thus helps to mitigate the popularity bias.

5 Experiments
5.1 Experimental Setup
Dataset Description. Three publicly available datasets are employed in our experiments, i.e., Yelp2018, MovieLens-1M, and Douban. A detailed description can be found in the Appendix. For each dataset, we randomly select 80% of the historical interactions of each user as the training set, 10% as the validation set, and the remaining 10% as the test set.

Evaluation Metrics. To evaluate the performance of all methods, we adopt a ranking-based metric, Normalized Discounted Cumulative Gain@k (NDCG@k), and a relevancy-based metric, Hit Ratio@k (RECALL@k). The formulations of the two metrics are in the Appendix. As suggested by Krichene and Rendle [13], we rank all candidate items rather than sampled item sets when computing the above metrics, which guarantees that the evaluation process is unbiased.

Compared Methods. We compare our CGI with three classes of baseline methods: (1) MF-based methods, i.e., BPRMF [17] and NCF [7]; (2) GNN-based methods, i.e., NGCF [30] and LightGCN [8]; and (3) CL-based methods, i.e., DNN+SSL [36] and SGL [31]. We give a detailed introduction to these baselines in the Appendix. Note that DNN+SSL applies augmentation to item features, which is not applicable in our case, so following [31] we apply the augmentations to the ID embeddings of items instead.

Hyper-parameters. We initialize the latent vectors of both users and items with small random values for all models. The parameters for baseline methods are initialized as in the original papers and are then carefully tuned to achieve optimal performance. For a fair comparison, the dimensions of both the user and item embeddings are fixed to 64. We use Adam with $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$ to optimize all these methods. The batch size is set to 2048. The learning rate is set to 0.005 and decayed at a rate of 0.9 every five epochs. We set $\lambda = 0.02$ and $\beta = 0.01$ for the coefficients in Eq. 15. More details about the hyper-parameter settings of the baselines can be found in the Appendix.
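For completeness, here is a small NumPy sketch of the full-ranking Recall@k and NDCG@k evaluation described above, computed for a single user. Masking out training interactions before ranking is a common convention that we assume here; function and argument names are illustrative, not taken from the released code.

```python
# Full-ranking Recall@k and NDCG@k for one user; an illustrative sketch of the evaluation protocol.
import numpy as np

def recall_ndcg_at_k(scores, train_items, test_items, k=10):
    """scores: predicted score per item; train_items/test_items: item index lists for this user."""
    scores = scores.copy()
    scores[list(train_items)] = -np.inf                  # do not re-rank training interactions (assumed)
    top_k = np.argsort(-scores)[:k]
    hits = np.isin(top_k, list(test_items)).astype(float)
    recall = hits.sum() / max(len(test_items), 1)
    dcg = (hits / np.log2(np.arange(2, k + 2))).sum()
    ideal = min(len(test_items), k)
    idcg = (1.0 / np.log2(np.arange(2, ideal + 2))).sum()
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return recall, ndcg
```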
5.2 Performance Comparisons
We summarize the performance of the different algorithms in terms of NDCG@k and RECALL@k (k = 10, 20) over the three datasets in Table 1. The experimental results demonstrate that CGI outperforms the other methods on all evaluation metrics. We conduct significance tests, and p-values < 0.05 indicate that the improvements of our CGI are statistically significant. Besides, we observe that the GNN-based methods perform better than the MF-based models. These results verify that exploiting higher-order connectivity in the user-item bipartite graph is essential to improving recommendation performance. This may also be the reason why the performance of DNN+SSL is inferior to that of SGL and our CGI, even though all of them apply contrastive learning. We can see that the CL-based graph learning methods, including our CGI, consistently outperform the GNN-based models, which verifies the effectiveness of contrastive learning for representation learning. Moreover, our CGI outperforms SGL by a large margin. The results demonstrate that, compared with the random dropping in SGL, the learnable graph augmentations optimized by the information bottleneck create better augmentation views and capture more comprehensive collaborative signals.

5.3 Ablation Studies
Effectiveness of Learnable Augmentation. To understand the respective effects of node dropping and edge dropping in the learnable augmentation, we conduct ablation studies on Yelp2018 and Movielens-1M. As shown in Table 2, we report NDCG@10 and RECALL@10 of CGI and SGL in different versions. Specifically, CGI-ND and CGI-ED denote CGI with only the node-dropping view or only the edge-dropping view, respectively; SGL-ND and SGL-ED denote SGL whose augmentation view is created by random node dropout or edge dropout, respectively. We find that: (1) CGI achieves clear improvements over SGL for both types of augmentation, which again verifies the effectiveness of the learnable graph augmentation optimized by the information bottleneck. (2) The full CGI performs better than both CGI-ND and CGI-ED. We ascribe this to multi-view learning, which enables the final representation to capture collaborative information of different semantics and thus enhances the robustness and expressiveness of the model.

Table 2: Comparison among models (NDCG@10 / RECALL@10).
Model     | Yelp2018 NDCG@10 | Yelp2018 RECALL@10 | MovieLens-1M NDCG@10 | MovieLens-1M RECALL@10
LightGCN  | 0.0344           | 0.0530             | 0.1696               | 0.1865
CGI       | 0.0392           | 0.0584             | 0.1979               | 0.2180
SGL-ND    | 0.0356           | 0.0544             | 0.1765               | 0.1948
CGI-ND    | 0.0369           | 0.0569             | 0.1934               | 0.2119
SGL-ED    | 0.0367           | 0.0552             | 0.1800               | 0.1965
CGI-ED    | 0.0379           | 0.0579             | 0.1916               | 0.2088

Figure 3: Performance of different item groups (Recall per item-popularity group for LightGCN, SGL, and CGI on Yelp2018 and Movielens-1M).

(3) The performance of CGI-ED is better than that of CGI-ND on the sparse dataset Yelp2018, but worse on the dense dataset Movielens-1M. We speculate that interaction noise is more significant in the sparse dataset, which contains less useful information, and that CGI-ND is less flexible there because it removes all the influence (i.e., edges) of popular nodes, which is hard to restore with scarce interactions.
In the dense dataset, by contrast, popularity bias becomes more significant, which makes CGI-ND more effective by blocking the influence of popular users or items.

Accuracy against Popularity Bias. To verify whether CGI is capable of mitigating popularity bias, we split the item set $\mathcal{I}$ into 5 groups (1-5) evenly based on their popularity; the larger the group ID, the larger the item degrees. Following [31], we decompose the RECALL@10 metric of the whole dataset into the contributions of these five item groups:

$\mathrm{RECALL}^{(g)} = \frac{\sum_{i=1}^{k} \mathrm{rel}_i^{(g)}}{|\mathcal{I}_u^{test}|},$  (19)

where $\mathrm{rel}_i^{(g)} = 1$ denotes that the item at rank $i$ is both in the test set and in the $g$-th item group. As such, $\mathrm{RECALL}^{(g)}$ measures the performance over the $g$-th item group. From Fig. 3, we can see that recommender systems tend to recommend popular items while leaving unpopular items less likely to be discovered, which further exacerbates the long-tail distribution. Our CGI can significantly improve the recommendation accuracy on long-tail items. Although both GCL methods, CGI and SGL, show no superiority on the top 20% of items, the overall improvements in Table 1 indicate that they better capture long-tail items' information in the user preference representations.

Robustness to Interaction Noises. To verify CGI's robustness to interaction noise, we generate different proportions of negative interactions (i.e., 5%, 10%, 15%, and 20%) to contaminate the training set and report the performance on the unchanged test set. Fig. 4 shows NDCG@10 on Yelp2018 and Movielens-1M and the performance degradation ratio under the corresponding contaminated training sets. As expected, the more noise we add, the worse all models perform, since all of them use LightGCN as the backbone, which fully relies on the adjacency matrix $\mathbf{A}_{\mathcal{G}}$ to refine the representations of users and items in Eq. 7. However, the performance degradation of our CGI is smaller than that of the other models on both datasets, and the gaps between CGI and the other models grow larger as the noise increases. This suggests that our CGI framework mitigates the noise in interaction data more effectively, and that our learnable augmentation optimized by IB contrastive learning exhibits good robustness in the presence of a high proportion of noise, which is consistent with our proof in Sect. 4.3. We also observe that CGI is more robust on Movielens-1M. This makes sense since Movielens-1M is much denser than Yelp2018 according to the statistics in the Appendix, and thus the bipartite graph of Yelp2018 is more sensitive to the added noise.

Effectiveness of Information Bottleneck. To investigate the effect of the information bottleneck, we consider the following variants of CGI with different contrastive learning strategies: our complete method (CGI), our method without contrastive learning (GL), and a variant that maximizes the correspondence among different views, i.e., $\min \tilde{\mathcal{L}}_{rec} - I(\mathbf{E}; \tilde{\mathbf{E}})$ (GCL). Fig. 5 shows the recommendation training loss w.r.t. the number of training steps and the evaluation results on Yelp, from which we observe that the multi-view graph learning frameworks driven by contrastive learning converge more easily. Specifically, when maximizing the mutual information among views, the GCL framework drops more quickly at the very beginning and settles into a steadily decreasing state afterward.
However, with IB contrastive learning, the recommendation loss of our CGI keeps declining after the initial sharp drop instead of stopping early, and is therefore more likely to converge to a better local optimum. This is probably why CGI performs better than both GL and GCL, as illustrated by the right part of Fig. 5. We also find that multi-view graph learning benefits more from IB contrastive learning than from the conventional kind, since IB encourages dropping the noisy information irrelevant to the recommendation, as discussed in Sect. 4.3.

Performance with Other GNNs
To verify that our method generalizes to other GNNs, we evaluate CGI and the baseline SGL on two other popular GNN-based recommenders, GC-MC [24] and NGCF [30]. The experimental results are shown in Table 3. Both graph contrastive learning methods improve over their backbones. On NGCF, CGI is consistently superior to SGL. On GC-MC, CGI does not improve significantly over SGL, probably because GC-MC uses only one GCN layer and can therefore aggregate only 1-hop neighbors, which leaves the learnable augmentation too little information to work with.

6 Conclusions
In this paper, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI), a novel method that learns better augmentations from different aspects for the multi-view representation learning of recommendation. In particular, we propose a fully differentiable learner that drops nodes and edges to construct different types of augmentation views coupled with the recommendation task. We integrate the information bottleneck into the multi-view contrastive learning process for recommendation and prove that it mitigates interaction noise. Extensive experiments on three public datasets verify the effectiveness of CGI.

Acknowledgments and Disclosure of Funding
This work was supported by Alibaba Group through the Alibaba Research Intern Program.
1. What is the main contribution of the paper, and how does it address popularity biases and noisy interactions in recommendation systems?
2. What are the strengths of the proposed model, particularly in its design and extensive studies?
3. What are the weaknesses of the paper, especially regarding related works and technical novelty?
4. Do you have any questions or suggestions regarding the experimental results, such as requesting standard deviation and the number of actual runs?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes CGI, a contrastive model for item-user bipartite graphs in recommendation systems. CGI consists of two parts: learnable graph structure augmentation and information bottleneck losses. Each of these modules reduces the influence of popularity biases and noisy interactions. The experiments on three real-world recommendation tasks demonstrate that CGI outperforms baselines, especially for unpopular items and noisy settings.

Strengths And Weaknesses
Strengths
This paper deals with an important problem in recommendation systems. The model design aligns with the motivation, and extensive studies under various scenarios are well-presented. The authors design experiments for sub-problems of this particular domain (popularity biases and interaction noises) and present them in an appropriate format. I like the Effectiveness of Information Bottleneck part (line 340), since this is an essential analysis to support the model design choice of MI minimization rather than maximization.

Weaknesses
Although I have decided to rate this paper as borderline accept, there are two flaws in this paper. One is a minor problem that can be solved by simple revision, but the other can be critical. Even if a rejection decision is made, I will not object to it.
First, some related works on information bottleneck models for graphs are missing:
Yu, Junchi, et al. "Graph Information Bottleneck for Subgraph Recognition." International Conference on Learning Representations. 2020.
You, Yuning, et al. "Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations." Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining. 2022.
Sun, Q., et al. "Graph Structure Learning with Variational Information Bottleneck." Proceedings of the AAAI Conference on Artificial Intelligence. 2022.
Yu, Junchi, Jie Cao, and Ran He. "Improving Subgraph Recognition with Variational Graph Information Bottleneck." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. (Yes, I know this paper was published very recently, but it was on arXiv more than three months ago. Moreover, it is deeply related to the authors' work. See the 'Second' point below.)
Second, the CGI model might be a straightforward combination of existing approaches, albeit for a specific domain, so the technical novelty is somewhat limited. Learnable masking of nodes or edges with reparameterization is a well-known approach for learning graph structures, and using it as augmentation has been proposed recently (as the authors say). The authors use information bottleneck losses in the same form as the original GIB paper (and other variants for graph tasks) with similar motivations. I believe this is an appropriate choice for this problem, but it is difficult to call it novel. In addition, the theoretical result (Proposition 1) is a simple application of Proposition 3.1 in [1] and is very similar to Lemma 4.1 in [2].
[1] Achille, Alessandro, and Stefano Soatto. "Emergence of Invariance and Disentanglement in Deep Representations." The Journal of Machine Learning Research 19.1 (2018): 1947-1980.
[2] Yu, Junchi, Jie Cao, and Ran He. "Improving Subgraph Recognition with Variational Graph Information Bottleneck." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.

Questions
It is nice to see the statistical significance tests in the result tables.
Could you report the standard deviation and the number of actual runs for the experiments?
How about changing the name from CGI to CGIB? Surveying existing works, models whose names end with 'I' are conventionally InfoMax models (DGI, HDGI, GMI, GCI, …), while models whose names end with 'IB' are usually information bottleneck models (GIB, HGIB, VIB-GSL, VGIB, …).

Limitations
The authors fairly discussed limitations.
NIPS
Title Contrastive Graph Structure Learning via Information Bottleneck for Recommendation Abstract Graph convolution networks (GCNs) for recommendations have emerged as an important research topic due to their ability to exploit higher-order neighbors. Despite their success, most of them suffer from the popularity bias brought by a small number of active users and popular items. Also, a real-world user-item bipartite graph contains many noisy interactions, which may hamper the sensitive GCNs. Graph contrastive learning show promising performance for solving the above challenges in recommender systems. Most existing works typically perform graph augmentation to create multiple views of the original graph by randomly dropping edges/nodes or relying on predefined rules, and these augmented views always serve as an auxiliary task by maximizing their correspondence. However, we argue that the graph structures generated from these vanilla approaches may be suboptimal, and maximizing their correspondence will force the representation to capture information irrelevant for the recommendation task. Here, we propose a Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which adaptively learns whether to drop an edge or node to obtain optimized graph structures in an end-to-end manner. Moreover, we innovatively introduce the Information Bottleneck into the contrastive learning process to avoid capturing irrelevant information among different views and help enrich the final representation for recommendation. Extensive experiments on public datasets are provided to show that our model significantly outperforms strong baselines. 2 1 Introduction Recommender systems have been widely deployed to alleviate information overload in diverse scenarios including e-commerce, online news and multimedia contents, which requires high-quality user and item representations learned from the historical interactions [7, 14, 43]. Recently, thanks to the powerful capability in modeling graph-structured data, Graph Convolution Networks (GCNs) provide an efficient way to integrate multi-hop neighbors into node representation learning and show prominent performance in recommendation [37, 30, 8]. Although encouraging performance has been achieved, we argue that most GCN-based recommender models suffer from the following two limitations, of which the impacts on the user’s exhibited preference are presented in Fig. 1. i) Popularity Bias. Items inherently have different customer sizes, and this imbalance can potentially lead to popularity bias [45]. In most recommender systems, the customer size for items usually follows a long-tail distribution, which means a few items have massive customers while the majority have few customers. Similarly, most users have few interactions. This skewed data distribution will bias GCN-based models towards the popular users and items easily ∗Equal contributions from both authors. This work is done when Chunyu Wei works as an intern at Alibaba. 2The code is available on https://github.com/weicy15/CGI. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). during multi-hop convolution, which may hamper the representation learning. ii) Interaction Noises. User-item interactions usually contain noises especially in the scenarios with only implicit feedbacks (e.g., clicks and purchases). 
More specifically, these noisy edges in the bipartite graph are not necessarily aligned with user preferences [18], since it’s common that the user clicks something by mistake or finds something boring after purchasing. GCN-based models are known to be vulnerable to the quality of the input graphs [44], which means aggregating misleading neighborhood information is likely to lead to sub-optimal performance. Recent advances in graph contrastive learning [27, 38] have identified an effective training scheme for mitigating popularity bias and increasing robustness for noise on graph-based tasks, which inspire many studies [31, 41, 33] to introduce this training scheme to enhance representation learning for recommendations. Nevertheless, existing studies have two limitations. First, most methods perform data augmentation by randomly dropping edges/nodes to change the graph structure [31], shuffling the embeddings to corrupt the node representations [41], or relying on predefined rules [6]. However, within unsupervised settings, structures created from these vanilla approaches may be suboptimal for recommendation tasks and also lack persuasive rationales for why the randomly dropped edges/nodes alleviate the popularity bias and interaction noises. Like the obtained representation No.1 in Fig. 1, structures created from these vanilla approaches may deviate from the optimal area. Second, most methods generate multiple views only to serve as an auxiliary task by maximizing the agreement of node representations among these views, which may force the user or item representation in different views to capture the information irrelevant for the recommendation task. For example, the obtained representation No.1 in Fig. 1 contains much information irrelevant to the real preference. So we believe that a good augmentation (e.g., No.2 in Fig. 1) should cover as much optimal area as possible while being as small as possible to reduce useless information. To address the aforementioned limitations, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI) for recommendation, which contains two key components: learnable graph augmentation and information bottleneck contrastive learning. First, we propose learnable graph augmentation to learn whether to drop an edge or node to transform the original bipartite graph into correlated views, which will be jointly optimized with the downstream recommendation in an end-to-end fashion. As a result, these generated views can intentionally reduce the influence of popular nodes while preserving information of the isolated nodes, and thus help to mitigate the popularity bias. The intuition behind is that random dropout will indiscriminately drop nodes or edges regardless of the corresponding node degrees, while by message passing mechanism, GCNs are easier to reconstruct the missing information of popular users or items, but much harder to reconstruct those isolated nodes with few connections, thus may overemphasize those high-degree nodes. These generated views with debiased information are all fed into the GCN-based recommender for multi-view representation learning to increase the ability against popularity bias. Second, we proposed to integrate different views into a compact representation for the downstream recommendation tasks, which can further improve the robustness of the model. 
Generally, when information from different views complements each other, it can be expected that the multi-view representation learning approaches can improve downstream performance [28]. So we argue that simply maximizing the mutual information in the conventional graph contrastive learning may push the representations of different views to capture information irrelevant to the downstream task. Inspired by the recent advances of Information Bottleneck (IB) [32], which encourages the representation to capture the minimum sufficient information for the downstream task, we utilize the IB principle to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. By doing so, the learnable graph augmenters can learn to remove noisy interactions in the original graph as much as possible, since these interactions are of no help for the downstream recommendation. Also, the IB principle helps representations of different views to capture collaborative information of different semantics complement to each other. The contributions of this paper are summarized as follows. (1) We propose the CGI to construct optimized graph structures by dropping nodes and edges adaptively for the multi-view representation learning of users and items, which provides rationales for alleviating the popularity bias. (2) To efficiently drop information irrelevant to the downstream recommendation, we innovatively integrate information bottleneck into the multi-view contrastive learning process for recommendation and prove that it can better mitigate interaction noises. (3) Experimental results show that our method outperforms the state-of-the-art methods on three benchmark datasets from different domains. 2 Related Work Graph-based Recommendation Early works exploiting the user-item bipartite graph for recommendation like ItemRank [3] usually followed the label propagation mechanism to propagate users’ preference over the graph, i.e., encouraging connected nodes to have similar labels. In recent years, Graph Convolution Networks (GCNs) have made great progress in representation learning tasks including node classification and link prediction [5, 12, 35]. Motivated by the strength of GCNs, several works [24, 8, 37, 30] have adapted GCNs on the user-item bipartite graph to learn more robust latent representations for users and items in recommender systems. Contrastive Learning Contrastive Learning (CL) [22, 9] was firstly proposed to train CNNs for image representation learning. Graph Contrastive Learning (GCL) applies the idea of CL on GNNs. DGI [27] and InfoGraph [19] learn node representations according to the mutual information between nodes and the whole graph. Peng et al. [15] developed an unsupervised learning model trained by maximizing mutual information of nodes between the input and output of a graph neural encoder. Hu et al. [10] extend the idea to build contrastive pairs between nodes and subgraphs. In addition, GCC [16] designs the pre-training task as subgraph instance discrimination in and across networks and leverage CL to empower GNNs. And a very recent work SGL [31] supplements the classical supervised task of recommendation with an auxiliary graph CL task, which generates multiple views of a node and maximizes the agreement between different views. 
However, it differs from our work in: (1) SGL [31] generates contrastive pairs by randomly dropping edges/nodes, while our work adopts a learnable augmenter to optimize the generated views. (2) SGL [31] utilizes conventional CL as an auxiliary task by maximizing the agreement of augmentation views, while we propose to encourage the differences between the augmentation views and the original graph. Learning by Information-Bottleneck Information Bottleneck (IB) [23] is an approach based on information theory, which states that if the obtained representation discards information from the input which is not useful for a given task, it will increase robustness for the downstream tasks. Besides, the information bottleneck principle is used in multi-view representation learning [34, 29, 2]. Formally, given the original data X with label Y, IB is to obtain a compact and effective representation Z of X. And the objective of the IB principle is as follows: max Z I(Y;Z)− βI(X;Z), (1) where β is the coefficient to balance the mutual information I(Y,Z) and I(X,Z). Recently, some works proposed to integrate the IB principle into the graph learning process. You et al. [39] propose a variational graph auto-encoder to generate contrastive views and the downstream contrastive learning utilizes IB performing on graph representations as the unsupervised loss. Both Yu et al. [40] and Yu et al. [42] aim to directly reveal the vital substructure in the subgraph level, among which [1] learns a node assignment matrix to extract the subgraph, and implements the IB of two graphs by estimating the KL-divergence from graph latent representation with a statistic network (DONSKER-VARADHAN Representation of KL-divergence). And Yu et al. [42] employ noise injection to manipulate the graph, and customizes the Gaussian prior for each input graph and the injected noise, so as to implement the IB of two graphs with a tractable variational upper bound. Our CGI differs from them, since we do not directly aim to find an optimal graph structure, instead we try to learn the graph structure complementing the original one. Then by integrating different views into a compact representation, we obtain the optimal node representation for the downstream task. Sun et al. [20] learn to mask node feature and generates new structure with the masked feature. Afterward, [20] adopt GNN to learn the distribution of graph representation and utilize the KL-divergence between the learned distribution and the prior distribution to implement the IB. All these methods aim to find a better structure or representation to replace the original graph for the downstream task, while our CGI follows a multi-view representation learning schema. IB is utilized to minimize the mutual information between the original graph and the generated views while maintaining the downstream recommendation performance of each view. Besides the noise-invariance property, IB helps representations of different views to capture collaborative information of different semantics that complement each other. AD-GCL [21] shares some ideas with our CGI but there are fundamental differences. Specifically, AD-GCL focuses on training self-supervised GNNs for graph-level tasks. In contrast, CGI aims to mitigate the popularity bias and interaction noises of node-level collaborative filtering (CF). In addition, AD-GCL adopts an adversarial strategy aiming to maximize the agreement of final representations of different views. 
Instead, our CGI minimizes the mutual information of different views to capture collaborative information of different semantics. To the best of our knowledge, this is the first study on leveraging the IB principle to enhance graph-based recommendations. 3 Preliminaries Problem Definition. Let U = {u1, u2, . . . , um} denotes the set of users, and let I = {i1, i2, . . . , in} denotes the set of items. We typically use a binary matrix R ∈ Rm×n to store user-item interactions (e.g., purchases and clicks), where rui = 1 indicates that user u consumed item i while rui = 0 means that item i is unexposed to user u or user u is not interested in item i. Following most existing works [30, 8], we represent interaction data as a user-item bipartite graph G = {V, E}, where the node set V = U ∪ I and the edge set E = {eui|rui = 1, u ∈ U , i ∈ I}. The adjacency matrix AG can be formulated as follows: AG = [ 0 R RT 0 ] . (2) With respect to the adjacency matrix AG , the degree matrix DG ∈ N(m+n)×(m+n) is a diagonal matrix, in which each entry DG [i, i] denotes the number of nonzero entries in the i-th row of AG . GCN Paradigm. The core of graph convolution on graph G is to update the ego node by aggregating the representations of its neighbor nodes, which can be formulated as follows: E(l) = GCN(E(l−1),G), (3) where E(l−1) is the current representations of nodes and E(l) is the updated representations after the graph convolution layer. E(0) is the initial inputs, which are usually the ID embeddings (trainable parameters). From the vector level, Eq. 3 can be interpreted as: e(l)u = f (l) combine(e (l−1) u , f (l) aggregate({e (l) i |i ∈ Nu})), (4) e (l) i = f (l) combine(e (l−1) i , f (l) aggregate({e(l)u |u ∈ Ni})), (5) where Nu and Ni are the neighbor node set of user u and item i, respectively. There are many works designing different fcombine and faggregate [5, 26, 35]. Usually, there will be readout function to generate the final representations for the recommendation task: e = freadout({e(l)|l = 0, 1, . . . , L}). (6) For example, freadout can be concatenation [30], weighted sum [8] and retaining the last output [24]. LightGCN Brief. In this paper, we implement our CGI on the simple but effective GCN-based recommendation model LightGCN. It adopts weighted sum aggregators and abandon the use of feature transformation and nonlinear activation, of which the matrix form can be formulated as: E(l) = (D − 12 G AGD − 12 G )E (l−1), l ∈ N+, (7) where E(l−1) = [E(l−1)u ,E (l−1) i ] is the output of the previous LightGCN layer or the initial E (0). At last, LightGCN implement the freadout by weighted sum, in which the weight of each layer is set as 1 L+1 following the original work. After obtaining the representations of users and items, the inner product r̂ui = eTuei is used to predict preference score, which is commonly adopted in most recommender system: LightGCN employ the Bayesian Personalized Ranking (BPR) loss [17] to optimize the model parameters: Lrec =∑ (u,i,j)∈O −lnσ(r̂ui − r̂uj), where O = {(u, i, j)|(u, i) ∈ R+, (u, j) ∈ R−} is the pairwise training data, in which R+ denotes the observed interactions, and R− denotes the unobserved interactions. In this work, we also choose it as the objective function for the recommendation task. 4 Methodology The framework of CGI is illustrated in Fig. 2 and we detail the inference in Appendix. 
4 Methodology The framework of CGI is illustrated in Fig. 2, and we detail the inference procedure in the Appendix. 4.1 Learnable Multi-View Augmentation Most GCN-based recommenders, such as LightGCN [8], fully rely on the adjacency matrix A_G to refine the representations of users and items in Eq. 7. However, A_G may contain biased and noisy information, as discussed in Sec. 1, which continues to propagate misleading signals as LightGCN goes deeper. On the other hand, the vanilla random dropout used in most contrastive learning for recommendation cannot create powerful views that alleviate popularity bias and interaction noises. We hence utilize parameterized networks to generate layer-wise optimized augmentation views. Specifically, we assign different graph convolution layers different learned subgraphs, coupled with the downstream recommendation, and thus obtain multi-view user and item representations. We elaborate on the two types of learnable augmentations as follows. Node-Dropping View. As illustrated in Sect. 1, popular users or items in the graph may skew the data distribution and thus hinder the GCN-based recommender. We therefore perform learnable node dropping at each layer to mask the influential nodes and create the Node-Dropping view, which can be formulated as: G_ND^{(l)} = {{v_i ⊙ ρ_i^{(l)} | v_i ∈ V}, E}, (8) where ρ_i^{(l)} ∈ {0, 1} is drawn from a Bernoulli distribution parameterized by ω_i^{(l)}, i.e., ρ_i^{(l)} ∼ Bern(ω_i^{(l)}), and denotes whether to keep node v_i. Simply removing a selected node alongside all its connections would cause a dramatic change in the bipartite graph structure, affecting information aggregation and making training unstable. Thus, instead of removing the selected node, we replace the selected node v with its local subgraph's representation to obscure its original representation, and retain its corresponding edges. For node v, we perform a random walk on the bipartite graph G with walk length k, and then take the mean pooling of the sampled nodes as v's local subgraph representation. Edge-Dropping View. The goal of the Edge-Dropping view is to generate a subgraph that filters out noisy edges and intentionally decreases the influence of popular nodes for the GCN layers. Similarly to the Node-Dropping view, we create the Edge-Dropping view by learnable edge dropping: G_ED^{(l)} = {V, {e_{ij} ⊙ ρ_{ij}^{(l)} | e_{ij} ∈ E}}, (9) where ρ_{ij}^{(l)} ∈ {0, 1} also follows ρ_{ij}^{(l)} ∼ Bern(ω_{ij}^{(l)}) and denotes whether the edge e_{ij} is present. Following [26], we adopt multi-layer perceptrons (MLPs) to compute the parameters ω_i^{(l)} and ω_{ij}^{(l)} that control whether to mask node v_i and edge e_{ij}, respectively: ω_i^{(l)} = MLP(e_i^{(l)}); ω_{ij}^{(l)} = MLP([e_i^{(l)}; e_j^{(l)}]). (10) To efficiently optimize the multi-view structure learning in an end-to-end manner, we adopt the reparameterization trick [11] and relax the binary entries ρ from being drawn from a Bernoulli distribution to a deterministic function of the parameter ω and an independent random variable ϵ, which can be formulated as: ρ = σ((log ϵ − log(1 − ϵ) + ω)/τ), (11) where ϵ ∼ Uniform(0, 1), τ ∈ R^+ is the temperature, and σ(·) is the sigmoid function. With τ > 0, the function is smooth with a well-defined gradient ∂ρ/∂ω, enabling efficient optimization of the learnable Node-Dropping and Edge-Dropping views during training. At inference time, we drop a node or edge whose keep probability is less than 0.5.
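The relaxation in Eq. 11 is a binary-concrete (Gumbel-sigmoid) reparameterization. The sketch below shows one way it could be implemented for the dropping probabilities, treating the MLP output ω of Eq. 10 as a logit; the function and variable names are ours, not the paper's.

import torch

def relaxed_bernoulli_mask(omega, tau=0.5, training=True):
    # Eq. 11: rho = sigmoid((log eps - log(1 - eps) + omega) / tau), eps ~ Uniform(0, 1)
    if training:
        eps = torch.rand_like(omega).clamp(1e-6, 1 - 1e-6)
        return torch.sigmoid((torch.log(eps) - torch.log(1 - eps) + omega) / tau)
    # inference: hard-drop entries whose keep probability is below 0.5
    return (torch.sigmoid(omega) >= 0.5).float()

# usage for the edge-dropping view of layer l (edge_logits from the MLP in Eq. 10):
# edge_mask = relaxed_bernoulli_mask(edge_logits, tau=0.5, training=True)
# masked_values = A_values * edge_mask   # reweights the edges of G_ED^{(l)}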
Afterwards, we perform GCNs to obtain the representations of users and items on these views: E_ND^{(l)} = GCN(E_ND^{(l−1)}, G_ND^{(l)}), E_ED^{(l)} = GCN(E_ED^{(l−1)}, G_ED^{(l)}), (12) where the initial E_ND^{(0)} = E_ED^{(0)} = E^{(0)}. After stacking L LightGCN layers, we also adopt the weighted sum to construct the final representations E_ND and E_ED, respectively. For simplicity, we omit the augmentation types ND and ED in the symbols below and use Ẽ to denote the representations of these augmentation views. 4.2 Information Bottleneck Contrastive Learning Although we couple the learnable augmentation process and the recommendation process together, we find that relying solely on the recommendation objective cannot adequately guide the dropout process to create optimal augmentation views. Thus, we adopt the Information Bottleneck principle to retain the minimum sufficient information in each view for the downstream recommendation. Specifically, different from conventional contrastive learning, we instead encourage divergence between the representations of the augmentation view and the original graph while maximizing the information relevant to the recommendation task. By doing so, we can obtain comprehensive multi-view representations and efficiently drop noisy collaborative information for the recommendation. Accordingly, the objective in Eq. 1 is induced as: min_{(E, Ẽ)} L̃_rec + I(E; Ẽ), (13) where L̃_rec is the BPR loss of the representation from the augmentation view and I(E; Ẽ) is the mutual information between the representations of the two corresponding views. According to [25, 19], minimizing the InfoNCE loss [4] is equivalent to maximizing a lower bound of the corresponding mutual information. We therefore adopt the negative InfoNCE to estimate the mutual information between the representations of the augmentation view and the original graph, which consists of mutual information from both the user side and the item side. Formally, for the user-side mutual information, we consider the representations of the same user in the augmentation view and the original graph as positive pairs (i.e., {(e_i, ẽ_i) | v_i ∈ U}), and representations of two different users in the augmentation view and the original graph as negative pairs (i.e., {(e_i, ẽ_j) | v_i, v_j ∈ U, i ≠ j}): I(E_u; Ẽ_u) = \sum_{v_i ∈ U} log [ exp(s(e_i, ẽ_i)/τ′) / \sum_{v_j ∈ U} exp(s(e_i, ẽ_j)/τ′) ], (14) where s(·) measures the similarity between two vectors and is set to the cosine similarity, and τ′ is a temperature hyper-parameter similar to that in Eq. 11. Analogously, we can obtain the item-side mutual information I(E_i; Ẽ_i), and the overall mutual information is obtained by combining the two sides: I(E; Ẽ) = I(E_u; Ẽ_u) + I(E_i; Ẽ_i). 4.3 Optimization To obtain comprehensive multi-view representations, we utilize two parameterized networks to learn to create the Node-Dropping view and the Edge-Dropping view simultaneously. In order to integrally explore both views for better recommendation, we jointly optimize the recommendation tasks of these views and the self-supervised IB contrastive learning: L = L_rec + L_rec^{ND} + L_rec^{ED} + λ(I(E; E_ND) + I(E; E_ED)) + β∥Θ∥_2^2, (15) where L_rec^{ND} and L_rec^{ED} are the recommendation objectives of the Node-Dropping view and the Edge-Dropping view, respectively. The last term is an L2 regularization. λ and β are the hyper-parameters controlling the strength of the IB contrastive learning task and the L2 regularization, respectively.
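As a concrete illustration of Eq. 14, the sketch below estimates the user-side (or item-side) mutual information between the original-graph and augmentation-view representations with an InfoNCE-style score; Eq. 15 then penalizes this quantity with weight λ. This is a minimal sketch with assumed names and negatives taken over all provided rows, not the authors' code.

import torch
import torch.nn.functional as F

def infonce_mi(E_orig, E_aug, temperature=0.2):
    # Eq. 14: sum_i log [ exp(s(e_i, e~_i)/tau') / sum_j exp(s(e_i, e~_j)/tau') ]
    z1 = F.normalize(E_orig, dim=-1)          # cosine similarity via normalized dot products
    z2 = F.normalize(E_aug, dim=-1)
    logits = z1 @ z2.t() / temperature        # pairwise similarities, shape [N, N]
    return (logits.diag() - torch.logsumexp(logits, dim=1)).sum()

# IB term of Eq. 15 for one augmentation view:
# mi = infonce_mi(E_user, E_user_nd) + infonce_mi(E_item, E_item_nd)
# loss = rec_loss + rec_loss_nd + rec_loss_ed + lam * mi + beta * l2_reg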
Proposition 1. Formally, we denote the learned augmentation view as G̃, the noisy graph structure as G′, and the downstream recommendation information as Y_Rec. Suppose G′ is irrelevant to Y_Rec; then the mutual information I(G′; G̃) is upper bounded by I(G; G̃) − I(Y_Rec; G̃): I(G′; G̃) ≤ I(G; G̃) − I(Y_Rec; G̃). (16) Proof. Following the Markov chain assumption in [1], we suppose G is determined by Y_Rec and G′, and we can define the following Markov chain: (Y_Rec, G′) → G → G̃. According to the Data Processing Inequality, we have: I(G; G̃) ≥ I((Y_Rec, G′); G̃) = I(G′; G̃) + I(Y_Rec; G̃ | G′) = I(G′; G̃) + H(Y_Rec | G′) − H(Y_Rec | G′, G̃). (17) Since G′ and Y_Rec are independent, we have H(Y_Rec | G′) = H(Y_Rec). Also, it is straightforward that H(Y_Rec | G′, G̃) ≤ H(Y_Rec | G̃). Thus, we can simplify Eq. 17 as follows: I(G; G̃) ≥ I(G′; G̃) + H(Y_Rec) − H(Y_Rec | G̃) = I(G′; G̃) + I(Y_Rec; G̃). (18) Thus, we obtain I(G′; G̃) ≤ I(G; G̃) − I(Y_Rec; G̃), where I(Y_Rec; G̃) is inversely related to the L̃_rec term in Eq. 13. Eq. 16 shows that optimizing the IB contrastive objective in Eq. 13 amounts to minimizing an upper bound on the mutual information between the learned augmentation view and the noisy structure. Specifically, it provides a theoretical guarantee that the IB contrastive learning leads to the noise-invariance property by compressing the information in both augmentation views. Meanwhile, the IB contrastive objective also requires the augmentation view to remain predictive for the recommendation task, which can intentionally reduce the influence of popular nodes while preserving information of isolated nodes, and thus helps to mitigate the popularity bias. 5 Experiments 5.1 Experimental Setup Dataset Description. Three publicly available datasets are employed in our experiments, i.e., Yelp2018, MovieLens-1M, and Douban. A detailed description can be found in the Appendix. For each dataset, we randomly select 80% of the historical interactions of each user as the training set, 10% as the validation set, and the remaining 10% as the test set. Evaluation Metrics. To evaluate the performance of all methods, we adopt a ranking-based metric, Normalized Discounted Cumulative Gain@k (NDCG@k), and a relevance-based metric, Hit Ratio@k (RECALL@k). The formulations of the two metrics are in the Appendix. As suggested by Krichene and Rendle [13], we rank all candidate items instead of sampled item sets to calculate the above metrics, which guarantees that the evaluation process is unbiased. Compared Methods. We compare our CGI with three classes of baseline methods: (1) MF-based methods, i.e., BPRMF [17] and NCF [7]; (2) GNN-based methods, i.e., NGCF [30] and LightGCN [8]; and (3) CL-based methods, i.e., DNN+SSL [36] and SGL [31]. We give a detailed introduction to these baselines in the Appendix. Note that DNN+SSL applies augmentations to item features, which is not applicable in our case; following [31], we apply the augmentations to the ID embeddings of items instead. Hyper-parameters. We initialize the latent vectors of both users and items with small random values for all models. The parameters of the baseline methods are initialized as in the original papers and are then carefully tuned to achieve optimal performance. For a fair comparison, the dimensions of both the user and item embeddings are fixed to 64. We use Adam with β_1 = 0.9, β_2 = 0.999, ϵ = 1e−8 to optimize all these methods. The batch size is set to 2048. The learning rate is set to 0.005 and decayed at a rate of 0.9 every five epochs.
We set λ = 0.02 and β = 0.01 for the coefficients in Eq. 15. More details about the hyper-parameter settings of the baselines can be found in the Appendix. 5.2 Performance Comparisons We summarize the performance of the different algorithms in terms of NDCG@k and RECALL@k (k = 10, 20) over the three datasets in Table 1. The experimental results demonstrate that CGI outperforms the other methods on all evaluation metrics. We conduct significance tests, and p-values < 0.05 indicate that the improvements of our CGI are statistically significant. Besides, we observe that the GNN-based methods perform better than the MF-based models. These results verify that exploiting higher-order connectivity in the user-item bipartite graph is essential to improving recommendation performance. This may also be the reason why the performance of DNN+SSL is inferior to that of SGL and our CGI, even though all of them apply contrastive learning. We can see that the CL-based graph learning methods, including our CGI, consistently outperform the GNN-based models, which verifies the effectiveness of contrastive learning for representation learning. Besides, our CGI outperforms SGL by a large margin. The results demonstrate that, compared with the random dropping in SGL, the learnable graph augmentations optimized by the information bottleneck can create better augmentation views and capture more comprehensive collaborative signals. 5.3 Ablation Studies Effectiveness of Learnable Augmentation. To understand the respective effects of node-dropping and edge-dropping in the learnable augmentation, we conduct ablation studies on Yelp2018 and MovieLens-1M. As shown in Table 2, we report NDCG@10 and RECALL@10 of CGI and SGL in different versions. Specifically, CGI-ND and CGI-ED denote CGI with only the node-dropping view or only the edge-dropping view adopted, respectively; SGL-ND and SGL-ED denote SGL variants whose augmentation views are created by random node dropout and edge dropout, respectively. We find that: (1) Our CGI achieves obvious improvements over SGL for both types of augmentation, which again verifies the effectiveness of the learnable graph augmentation optimized by the information bottleneck. (2) The full CGI performs better than both CGI-ND and CGI-ED. We ascribe this to multi-view learning, which enables the final representation to capture collaborative information of different semantics and thus enhances the robustness and expressiveness of the model.

Table 2: Comparison among models.
            Yelp2018                MovieLens-1M
Model       NDCG@10   RECALL@10     NDCG@10   RECALL@10
LightGCN    0.0344    0.0530        0.1696    0.1865
CGI         0.0392    0.0584        0.1979    0.2180
SGL-ND      0.0356    0.0544        0.1765    0.1948
CGI-ND      0.0369    0.0569        0.1934    0.2119
SGL-ED      0.0367    0.0552        0.1800    0.1965
CGI-ED      0.0379    0.0579        0.1916    0.2088

Figure 3: Performance of different item groups (Recall over item groups 1-5 on Yelp2018 and MovieLens-1M for LightGCN, SGL, and CGI).

(3) The performance of CGI-ED is better than that of CGI-ND on the sparse dataset Yelp2018, and worse on the dense dataset MovieLens-1M. We speculate that interaction noises are more significant in the sparse dataset, which carries less useful information, and that CGI-ND is less flexible there, because it removes all influence (i.e., edges) of popular nodes, which is hard to restore with scarce interactions.
But on the dense dataset, popularity bias becomes more significant, which makes CGI-ND more effective by blocking the influence of popular users or items. Accuracy against Popularity Bias. To verify whether CGI is capable of mitigating popularity bias, we split the item set I into five groups (1-5) evenly based on their popularity; the larger the group ID, the larger the item degrees. Following [31], we decompose the RECALL@10 metric of the whole dataset into the contributions of the above five groups of items: RECALL^{(g)} = (\sum_{i=1}^{k} rel_i^{(g)}) / |I_test^u|, (19) where rel_i^{(g)} = 1 denotes that the item at rank i is in the test set and in the g-th item group at the same time. As such, RECALL^{(g)} measures the performance over the g-th item group. From Fig. 3, we can see that recommender systems tend to recommend popular items, leaving unpopular items less likely to be discovered, which further exacerbates the long-tail distribution. Our CGI can significantly improve the recommendation accuracy on long-tail items. Although both GCL methods, CGI and SGL, show no superiority on the top 20% of items, the overall improvements in Table 1 show that they better capture the long-tail items' information in the user preference representations. Robustness to Interaction Noises. To verify CGI's robustness to interaction noises, we generate different proportions of negative interactions (i.e., 5%, 10%, 15%, and 20%) to contaminate the training set, and report the performance on the unchanged test set. Fig. 4 shows the NDCG@10 on Yelp2018 and MovieLens-1M and the performance degradation ratio under the corresponding contaminated training sets. Unsurprisingly, the more noise we add, the worse all the models perform, since all the models use LightGCN as the backbone, which fully relies on the adjacency matrix A_G to refine the representations of users and items in Eq. 7. However, the performance degradation of our CGI is smaller than that of the other models on both datasets. What is more, the gaps between CGI and the other models grow larger as the noise increases. This suggests that our CGI framework can mitigate the noise in interaction data more effectively, and that our learnable augmentation optimized by the IB contrastive learning exhibits good robustness in the presence of a high proportion of noise, which is consistent with our proof in Sect. 4.3. We also observe that CGI is more robust on MovieLens-1M. This makes sense, since MovieLens-1M is much denser than Yelp2018 according to the statistics in the Appendix, and thus the bipartite graph of Yelp2018 is more sensitive to the added noise. Effectiveness of Information Bottleneck. To investigate the effect of the information bottleneck, we consider the following variants of CGI with different contrastive learning strategies: our complete method (CGI), our method without contrastive learning (GL), and our method that maximizes the correspondence among different views (i.e., min L̃_rec − I(E; Ẽ)) (GCL). Fig. 5 shows the recommendation training loss w.r.t. the number of training steps and the evaluation results on Yelp, from which we observe that the multi-view graph learning frameworks driven by contrastive learning converge more easily. Specifically, when maximizing the mutual information among views, the GCL framework drops more quickly at the very beginning and turns to a steadily decreasing state afterward.
However, with IB contrastive learning, the recommendation loss of our CGI keeps declining after an initial sharp drop instead of stopping early, and is thus more likely to converge to a better local optimum. This is probably why CGI performs better than both GL and GCL, as illustrated by the right part of Fig. 5. We also find that the multi-view graph learning benefits more from the IB contrastive learning than from the conventional one, since it encourages dropping noisy information irrelevant to the recommendation, as illustrated in Sect. 4.3. Performance with Other GNNs. To verify the generalization of our method to other GNNs, we tried CGI and the baseline SGL on two other popular GNN-based recommenders, GC-MC [24] and NGCF [30]. The experimental results are shown in Table 3. Both graph contrastive learning methods show improvements over the backbones. On NGCF, CGI shows consistent superiority over SGL. On GC-MC, CGI does not show a significant improvement over SGL, probably because GC-MC only uses one GCN layer, which means it can only exploit 1-hop neighbors for learning, making it challenging for the learnable augmentation to gather enough information. 6 Conclusions In this paper, we propose Contrastive Graph Structure Learning via Information Bottleneck (CGI), a novel framework that learns better augmentations from different aspects for multi-view representation learning in recommendation. In particular, we propose a fully differentiable learner that drops nodes and edges to construct different types of augmentation views coupled with the recommendation task. We integrate the information bottleneck into the multi-view contrastive learning process for recommendation and theoretically analyze its benefit. Extensive experiments conducted on three public datasets verify the effectiveness of CGI. Acknowledgments and Disclosure of Funding This work was supported by Alibaba Group through the Alibaba Research Intern Program.
1. What is the focus and contribution of the paper on graph neural networks? 2. What are the strengths of the proposed approach, particularly in terms of its motivation and performance compared to recent state-of-the-art methods? 3. Do you have any concerns or questions regarding the methodology or the experimental framework? 4. Are there any limitations or potential drawbacks of the proposed approach that the authors did not discuss? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper presents a new deep learning architecture for graphs, named CGI. CGI aims at building a vector space for users and items that can be used for recommender systems. The main novelty lies in the way two views are built (one for the nodes, one for the edges), by using data augmentation with perturbation (for dealing with high-degree nodes) and the Information Bottleneck framework to make the representation space more robust to noise. CGI is favorably compared to several recent models on three different datasets with a proper experimental framework (good metrics, ablation studies, etc.). I don't see any important flaw in the methodology. All in all, it seems to be a good, rational contribution to the field. Strengths And Weaknesses Strengths: a new, interesting model with good motivations; good results when compared to recent SOTA. Weaknesses: I don't see any important weakness. Questions Is it possible to design a more accurate baseline by mitigating the weight of node centrality (like a normalization) before using solutions from the literature? Section 4.3 is not perfectly clear to me. I don't understand to what extent Proposition 1 is a real new contribution (that can be used for other research, and in what context), or if it's a rather straightforward derivation. Limitations I don't see any important limitation. I've not given the maximal score because it's really interesting but not fundamentally novel (like, "groundbreaking"). However, I'm bothered by the fact that the authors don't point out any possible limitation... and I cannot believe that. In my opinion, a paper like this should tell when the model isn't fit for the problem and what the possible extensions may be. Perhaps it's the (possible) computational cost of optimization, or cases where the two views (node vs. edge) disagree?
NIPS
Title The Reversible Residual Network: Backpropagation Without Storing Activations Abstract Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth. 1 Introduction Over the last five years, deep convolutional neural networks have enabled rapid performance improvements across a wide range of visual processing tasks [19, 26, 20]. For the most part, the state-of-the-art networks have been growing deeper. For instance, deep residual networks (ResNets) [13] are the state-of-the-art architecture across multiple computer vision tasks [19, 26, 20]. The key architectural innovation behind ResNets was the residual block, which allows information to be passed directly through, making the backpropagated error signals less prone to exploding or vanishing. This made it possible to train networks with hundreds of layers, and this vastly increased depth led to significant performance gains. Nearly all modern neural networks are trained using backpropagation. Since backpropagation requires storing the network’s activations in memory, the memory cost is proportional to the number of units in the network. Unfortunately, this means that as networks grow wider and deeper, storing the activations imposes an increasing memory burden, which has become a bottleneck for many applications [34, 37]. Graphics processing units (GPUs) have limited memory capacity, leading to constraints often exceeded by state-of-the-art architectures, some of which reach over one thousand layers [13]. Training large networks may require parallelization across multiple GPUs [7, 28], which is both expensive and complicated to implement. Due to memory constraints, modern architectures are often trained with a mini-batch size of 1 (e.g. [34, 37]), which is inefficient for stochastic gradient methods [11]. Reducing the memory cost of storing activations would significantly improve our ability to efficiently train wider and deeper networks. ∗These authors contributed equally. Code available at https://github.com/renmengye/revnet-public 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. We present Reversible Residual Networks (RevNets), a variant of ResNets which is reversible in the sense that each layer’s activations can be computed from the subsequent reversible layer’s activations. This enables us to perform backpropagation without storing the activations in memory, with the exception of a handful of non-reversible layers. The result is a network architecture whose activation storage requirements are independent of depth, and typically at least an order of magnitude smaller compared with equally sized ResNets. 
Surprisingly, constraining the architecture to be reversible incurs no noticeable loss in performance: in our experiments, RevNets achieved nearly identical classification accuracy to standard ResNets on CIFAR-10, CIFAR-100, and ImageNet, with only a modest increase in the training time. 2 Background 2.1 Backpropagation Backpropagation [25] is a classic algorithm for computing the gradient of a cost function with respect to the parameters of a neural network. It is used in nearly all neural network algorithms, and is now taken for granted in light of neural network frameworks which implement automatic differentiation [1, 2]. Because achieving the memory savings of our method requires manual implementation of part of the backprop computations, we briefly review the algorithm. We treat backprop as an instance of reverse mode automatic differentiation [24]. Let v_1, . . . , v_K denote a topological ordering of the nodes in the network's computation graph G, where v_K denotes the cost function C. Each node is defined as a function f_i of its parents in G. Backprop computes the total derivative dC/dv_i for each node in the computation graph. This total derivative defines the effect on C of an infinitesimal change to v_i, taking into account the indirect effects through the descendants of v_i in the computation graph. Note that the total derivative is distinct from the partial derivative ∂f/∂x_i of a function f with respect to one of its arguments x_i, which does not take into account the effect of changes to x_i on the other arguments. To avoid using a small typographical difference to represent a significant conceptual difference, we will denote total derivatives using v̄_i = dC/dv_i. Backprop iterates over the nodes in the computation graph in reverse topological order. For each node v_i, it computes the total derivative v̄_i using the following rule: v̄_i = \sum_{j ∈ Child(i)} (∂f_j/∂v_i)^T v̄_j, (1) where Child(i) denotes the children of node v_i in G and ∂f_j/∂v_i denotes the Jacobian matrix. 2.2 Deep Residual Networks One of the main difficulties in training very deep networks is the problem of exploding and vanishing gradients, first observed in the context of recurrent neural networks [3]. In particular, because a deep network is a composition of many nonlinear functions, the dependencies across distant layers can be highly complex, making the gradient computations unstable. Highway networks [29] circumvented this problem by introducing skip connections. Similarly, deep residual networks (ResNets) [13] use a functional form which allows information to pass directly through the network, thereby keeping the computations stable. ResNets currently represent the state-of-the-art in object recognition [13], semantic segmentation [35] and image generation [32]. Outside of vision, residuals have displayed impressive performance in audio generation [31] and neural machine translation [16]. ResNets are built out of modules called residual blocks, which have the following form: y = x + F(x), (2) where F, a function called the residual function, is typically a shallow neural net. ResNets are robust to exploding and vanishing gradients because each residual block is able to pass signals directly through, allowing the signals to be propagated faithfully across many layers. As displayed in Figure 1, residual functions for image recognition generally consist of stacked batch normalization ("BN") [14], rectified linear activation ("ReLU") [23] and convolution layers (with filters of shape three "C3" and one "C1").
As in He et al. [13], we use two residual block architectures: the basic residual function (Figure 1, right-top) and the bottleneck residual function (Figure 1, right-bottom). The bottleneck residual consists of three convolutions: the first is a point-wise convolution which reduces the dimensionality of the feature dimension, the second is a standard convolution with filter size 3, and the final point-wise convolution projects into the desired output feature depth. a(x) = ReLU(BN(x)), c_k(x) = Conv_{k×k}(a(x)), Basic(x) = c_3(c_3(x)), Bottleneck(x) = c_1(c_3(c_1(x))). (3) 2.3 Reversible Architectures Various reversible neural net architectures have been proposed, though for motivations distinct from our own. Deco and Brauer [8] develop a similar reversible architecture to ensure the preservation of information in unsupervised learning contexts. The proposed architecture is indeed residual and constructed to produce a lower triangular Jacobian matrix with ones along the diagonal. In Deco and Brauer [8], the residual connections are composed of all 'prior' neurons in the layer, while NICE and our own architecture segment a layer into pairs of neurons and additively connect one with a residual function of the other. Maclaurin et al. [21] made use of the reversible nature of stochastic gradient descent to tune hyperparameters via gradient descent. Our proposed method is inspired by nonlinear independent components estimation (NICE) [9, 10], an approach to unsupervised generative modeling. NICE is based on learning a non-linear bijective transformation between the data space and a latent space. The architecture is composed of a series of blocks defined as follows, where x_1 and x_2 are a partition of the units in each layer: y_1 = x_1, y_2 = x_2 + F(x_1). (4) Because the model is invertible and its Jacobian has unit determinant, the log-likelihood and its gradients can be tractably computed. This architecture imposes some constraints on the functions the network can represent; for instance, it can only represent volume-preserving mappings. Follow-up work by Dinh et al. [10] addressed this limitation by introducing a new reversible transformation: y_1 = x_1, y_2 = x_2 ⊙ exp(F(x_1)) + G(x_1). (5) Here, ⊙ represents the Hadamard or element-wise product. This transformation has a non-unit Jacobian determinant due to the multiplication by exp(F(x_1)). 3 Methods We now introduce Reversible Residual Networks (RevNets), a variant of Residual Networks which is reversible in the sense that each layer's activations can be computed from the next layer's activations. We discuss how to reconstruct the activations online during backprop, eliminating the need to store the activations in memory. 3.1 Reversible Residual Networks RevNets are composed of a series of reversible blocks, which we now define.
We must partition the units in each layer into two groups, denoted x_1 and x_2; for the remainder of the paper, we assume this is done by partitioning the channels, since we found this to work the best in our experiments. (The possibilities we explored included columns, checkerboard, rows, and channels, as done by [10]. We found that performance was consistently superior using the channel-wise partitioning scheme and comparable across the remaining options. We note that channel-wise partitioning has also been explored in the context of multi-GPU training via 'grouped' convolutions [18], and more recently, convolutional neural networks have seen significant success by way of 'separable' convolutions [27, 6].) Each reversible block takes inputs (x_1, x_2) and produces outputs (y_1, y_2) according to the following additive coupling rules, inspired by NICE's [9] transformation in Equation 4, and residual functions F and G analogous to those in standard ResNets: y_1 = x_1 + F(x_2), y_2 = x_2 + G(y_1). (6) Each layer's activations can be reconstructed from the next layer's activations as follows: x_2 = y_2 − G(y_1), x_1 = y_1 − F(x_2). (7) Note that unlike residual blocks, reversible blocks must have a stride of 1 because otherwise the layer discards information, and therefore cannot be reversible. Standard ResNet architectures typically have a handful of layers with a larger stride. If we define a RevNet architecture analogously, the activations must be stored explicitly for all non-reversible layers. 3.2 Backpropagation Without Storing Activations To derive the backprop procedure, it is helpful to rewrite the forward (left) and reverse (right) computations in the following way: Forward: z_1 = x_1 + F(x_2); y_2 = x_2 + G(z_1); y_1 = z_1. Reverse: z_1 = y_1; x_2 = y_2 − G(z_1); x_1 = z_1 − F(x_2). (8) Even though z_1 = y_1, the two variables represent distinct nodes of the computation graph, so the total derivatives z̄_1 and ȳ_1 are different. In particular, z̄_1 includes the indirect effect through y_2, while ȳ_1 does not. This splitting lets us implement the forward and backward passes for reversible blocks in a modular fashion. In the backward pass, we are given the activations (y_1, y_2) and their total derivatives (ȳ_1, ȳ_2) and wish to compute the inputs (x_1, x_2), their total derivatives (x̄_1, x̄_2), and the total derivatives for any parameters associated with F and G. (See Section 2.1 for our backprop notation.) We do this by combining the reconstruction formulas (Eqn. 8) with the backprop rule (Eqn. 1). The resulting algorithm is given as Algorithm 1. By applying Algorithm 1 repeatedly, one can perform backprop on a sequence of reversible blocks if one is given simply the activations and their derivatives for the top layer in the sequence. In general, a practical architecture would likely also include non-reversible layers, such as subsampling layers; the inputs to these layers would need to be stored explicitly during backprop. However, a typical ResNet architecture involves long sequences of residual blocks and only a handful of subsampling layers; if we mirror the architecture of a ResNet, there would be only a handful of non-reversible layers, and the number would not grow with the depth of the network. In this case, the storage cost of the activations would be small, and independent of the depth of the network.
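As a quick illustration of Eqns. 6 and 7, the following is a minimal PyTorch-style sketch of the additive coupling and its exact inverse. The linear residual functions are placeholders of our own choosing, used only to check invertibility; a real RevNet would use the Basic or Bottleneck residual functions of Eq. 3 applied to channel partitions.

import torch

def revblock_forward(x1, x2, F, G):
    # Eq. 6: additive coupling with stride-1 residual functions F and G
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def revblock_inverse(y1, y2, F, G):
    # Eq. 7: exact reconstruction of the inputs, so no activations need to be stored
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

F_fn, G_fn = torch.nn.Linear(16, 16), torch.nn.Linear(16, 16)
x1, x2 = torch.randn(4, 16), torch.randn(4, 16)
y1, y2 = revblock_forward(x1, x2, F_fn, G_fn)
r1, r2 = revblock_inverse(y1, y2, F_fn, G_fn)
assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)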
Computational overhead. In general, for a network with N connections, the forward and backward passes of backprop require approximately N and 2N add-multiply operations, respectively. For a RevNet, the residual functions each must be recomputed during the backward pass. Therefore, the number of operations required for reversible backprop is approximately 4N, or roughly 33% more than ordinary backprop. (This is the same as the overhead introduced by checkpointing [22].) In practice, we have found the forward and backward passes to be about equally expensive on GPU architectures; if this is the case, then the computational overhead of RevNets is closer to 50%.

Algorithm 1 Reversible Residual Block Backprop
1: function BLOCKREVERSE((y_1, y_2), (ȳ_1, ȳ_2))
2:   z_1 ← y_1
3:   x_2 ← y_2 − G(z_1)
4:   x_1 ← z_1 − F(x_2)
5:   z̄_1 ← ȳ_1 + (∂G/∂z_1)^T ȳ_2   ▷ ordinary backprop
6:   x̄_2 ← ȳ_2 + (∂F/∂x_2)^T z̄_1   ▷ ordinary backprop
7:   x̄_1 ← z̄_1
8:   w̄_F ← (∂F/∂w_F)^T z̄_1   ▷ ordinary backprop
9:   w̄_G ← (∂G/∂w_G)^T ȳ_2   ▷ ordinary backprop
10:  return (x_1, x_2), (x̄_1, x̄_2), and (w̄_F, w̄_G)
11: end function
(We assume for notational clarity that the residual functions do not share parameters, but Algorithm 1 can be trivially extended to a network with weight sharing, such as a recurrent neural net.)

Modularity. Note that Algorithm 1 is agnostic to the form of the residual functions F and G. The steps which use the Jacobians of these functions are implemented in terms of ordinary backprop, which can be achieved by calling automatic differentiation routines (e.g. tf.gradients or Theano.grad). Therefore, even though implementing our algorithm requires some amount of manual implementation of backprop, one does not need to modify the implementation in order to change the residual functions. Numerical error. While Eqn. 8 reconstructs the activations exactly when done in exact arithmetic, practical float32 implementations may accumulate numerical error during backprop. We study the effect of numerical error in Section 5.2; while the error is noticeable in our experiments, it does not significantly affect final performance. We note that if numerical error becomes a significant issue, one could use fixed-point arithmetic on the x's and y's (but ordinary floating point to compute F and G), analogously to [21]. In principle, this would enable exact reconstruction while introducing little overhead, since the computation of the residual functions and their derivatives (which dominate the computational cost) would be unchanged. 4 Related Work A number of steps have been taken towards reducing the storage requirements of extremely deep neural networks. Much of this work has focused on the modification of memory allocation within the training algorithms themselves [1, 2]. Checkpointing [22, 5, 12] is one well-known technique which trades off spatial and temporal complexity; during backprop, one stores a subset of the activations (called checkpoints) and recomputes the remaining activations as required. Martens and Sutskever [22] adopted this technique in the context of training recurrent neural networks on a sequence of length T using backpropagation through time [33], storing every ⌈√T⌉ layers and recomputing the intermediate activations between each during the backward pass. Chen et al. [5] later proposed to recursively apply this strategy on the sub-graph between checkpoints. Gruslys et al. [12] extended this approach by applying dynamic programming to determine a storage strategy which minimizes the computational cost for a given memory budget. To analyze the computational and memory complexity of these alternatives, assume for simplicity a feed-forward network consisting of L identical layers.
Again, for simplicity, assume the units are chosen such that the cost of forward propagation or backpropagation through a single layer is 1, and the memory cost of storing a single layer's activations is 1. In this case, ordinary backpropagation has computational cost 2L and storage cost L for the activations. The method of Martens and Sutskever [22] requires 2√L storage, and it demands an additional forward computation for each layer, leading to a total computational cost of 3L. The recursive algorithm of Chen et al. [5] reduces the required memory to O(log L), while increasing the computational cost to O(L log L). In comparison to these, our method incurs O(1) storage cost, as only a single block must be stored, and a computational cost of 3L. The time and space complexities of these methods are summarized in Table 1. Another approach to saving memory is to replace backprop itself. The decoupled neural interface [15] updates each weight matrix using a gradient approximation, termed the synthetic gradient, computed based on only the node's activations instead of the global network error. This removes any long-range gradient computation dependencies in the computation graph, leading to O(1) activation storage requirements. However, these savings are achieved only after the synthetic gradient estimators have been trained; that training requires all the activations to be stored. 5 Experiments We experimented with RevNets on three standard image classification benchmarks: CIFAR-10, CIFAR-100 [17], and ImageNet [26]. In order to make our results directly comparable with standard ResNets, we tried to match both the computational depth and the number of parameters as closely as possible. We observed that each reversible block has the computational depth of two original residual blocks. Therefore, we reduced the total number of residual blocks by approximately half, while approximately doubling the number of channels per block, since they are partitioned into two. Table 2 shows the details of the RevNets and their corresponding traditional ResNets. In all of our experiments, we were interested in whether our RevNet architectures (which are far more memory efficient) were able to match the classification accuracy of ResNets of the same size. 5.1 Implementation We implemented the RevNets using the TensorFlow library [1]. We manually make calls to TensorFlow's automatic differentiation method (i.e. tf.gradients) to construct the backward-pass computation graph without referencing activations computed in the forward pass. First, while building the backward graph, we reconstruct the input activations (x̂_1, x̂_2) for each block (Equation 8). Second, we apply tf.stop_gradient on the reconstructed inputs to prevent auto-diff from traversing into the reconstructions' computation graph, and then call the forward functions again to compute (ŷ_1, ŷ_2) (Equation 8). Lastly, we use auto-diff to traverse from (ŷ_1, ŷ_2) to (x̂_1, x̂_2) and the parameters (w_F, w_G). This implementation leverages the convenience of the auto-diff functionality to avoid manually deriving gradients; however, the computational cost becomes 5N, compared with 4N for Algorithm 1 and 3N for ordinary backpropagation (see Section 3.2). The full theoretical efficiency can be realized by reusing the activations of the F and G graphs that were computed in the reconstruction steps (lines 3 and 4 of Algorithm 1).
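To make this implementation strategy concrete, the following is a PyTorch-style sketch of one reversible-block backward step built on automatic differentiation (the analogue of Algorithm 1): reconstruct the inputs, cut the graph, re-run the forward functions, and differentiate. It is an illustration under our own naming; it is not the authors' code, which uses TensorFlow's tf.gradients and tf.stop_gradient as described above.

import torch

def revblock_backward(y1, y2, dy1, dy2, F, G):
    # F, G: torch.nn.Module residual functions; dy1, dy2: loss gradients w.r.t. (y1, y2)
    with torch.no_grad():                      # reconstruct the inputs (Eq. 7/8)
        x2 = y2 - G(y1)
        x1 = y1 - F(x2)
    x1 = x1.detach().requires_grad_(True)      # analogue of tf.stop_gradient
    x2 = x2.detach().requires_grad_(True)
    z1 = x1 + F(x2)                            # re-run the forward pass (Eq. 8)
    y2_re = x2 + G(z1)
    y1_re = z1
    params = tuple(F.parameters()) + tuple(G.parameters())
    grads = torch.autograd.grad(outputs=(y1_re, y2_re),
                                inputs=(x1, x2) + params,
                                grad_outputs=(dy1, dy2))
    dx1, dx2, dparams = grads[0], grads[1], grads[2:]
    return (x1.detach(), x2.detach()), (dx1, dx2), dparams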
5.2 RevNet performance Our ResNet implementation roughly matches the previously reported classification error rates [13]. As shown in Table 3, our RevNets roughly matched the error rates of traditional ResNets (of roughly equal computational depth and number of parameters) on CIFAR-10 and CIFAR-100, as well as on ImageNet (Table 4). In no condition did the RevNet underperform the ResNet by more than 0.5%, and in some cases, RevNets achieved slightly better performance. Furthermore, Figure 3 compares ImageNet training curves of the ResNet and RevNet architectures; reversibility did not lead to any noticeable per-iteration slowdown in training. (As discussed above, each RevNet update is about 1.5-2× more expensive, depending on the implementation.) We found it surprising that the performance matched so closely, because reversibility would appear to be a significant constraint on the architecture, and one might expect large memory savings to come at the expense of classification error. Impact of numerical error. As described in Section 3.2, reconstructing the activations over many layers causes numerical errors to accumulate. In order to measure the magnitude of this effect, we computed the angle between the gradients computed using stored and reconstructed activations over the course of training. Figure 4 shows how this angle evolved over the course of training for a CIFAR-10 RevNet; while the angle increased during training, it remained small in magnitude. Figure 3: Training curves for ResNet-101 vs. RevNet-104 on ImageNet, with both networks having approximately the same depth and number of free parameters. Left: training cross entropy; Right: classification error, where dotted lines indicate training, and solid lines validation. Figure 4 (RevNet-164 on CIFAR-10, plotted against the number of epochs, for stored vs. reconstructed activations): Left: angle (degrees) between the gradient computed using stored and reconstructed activations throughout training. While the angle grows during training, it remains small in magnitude. We measured 4 more epochs after the regular training length and did not observe any instability. Middle: training cross entropy. Right: classification error, where dotted lines indicate training, and solid lines validation. No meaningful difference in training efficiency or final performance was observed between stored and reconstructed activations. Figure 4 also shows training curves for CIFAR-10 networks trained using both methods of computing gradients. Despite the numerical error from reconstructing activations, both methods performed almost indistinguishably in terms of training efficiency and final performance. 6 Conclusion and Future Work We introduced RevNets, a neural network architecture where the activations for most layers need not be stored in memory. We found that RevNets provide a considerable reduction in the memory footprint at little or no cost to performance. As future work, we are currently working on applying RevNets to the task of semantic segmentation, the performance of which is limited by a critical memory bottleneck: the input image patch needs to be large enough to process high resolution images; meanwhile, the batch size also needs to be large enough to perform effective batch normalization (e.g. [36]).
We also intend to develop reversible recurrent neural net architectures; this is a particularly interesting use case, because weight sharing implies that most of the memory cost is due to storing the activations (rather than parameters). Another interesting direction is predicting the activations of previous layers’ activation, similar to synthetic gradients. We envision our reversible block as a module which will soon enable training larger and more powerful networks with limited computational resources. 7 Appendix 7.1 Experiment details For our CIFAR-10/100 experiments, we fixed the mini-batch size to be 100. The learning rate was initialized to 0.1 and decayed by a factor of 10 at 40K and 60K training steps, training for a total of 80K steps. The weight decay constant was set to 2× 10−4 and the momentum was set to 0.9. We subtracted the mean image, and augmented the dataset with random cropping and random horizontal flipping. For our ImageNet experiments, we fixed the mini-batch size to be 256, split across 4 Titan X GPUs with data parallelism [28]. We employed synchronous SGD [4] with momentum of 0.9. The model was trained for 600K steps, with factor-of-10 learning rate decays scheduled at 160K, 320K, and 480K steps. Weight decay was set to 1 × 10−4. We applied standard input preprocessing and data augmentation used in training Inception networks [30]: pixel intensity rescaled to within [0, 1], random cropping of size 224 × 224 around object bounding boxes, random scaling, random horizontal flipping, and color distortion, all of which are available in TensorFlow. For the original ResNet-101, We were unable to fit a mini-batch size of 256 on 4 GPUs, so we instead averaged the gradients from two serial runs with mini-batch size 128 (32 per GPU). For the RevNet, we were able to fit a mini-batch size of 256 on 4 GPUs (i.e. 64 per GPU). 7.2 Memory savings Fully realizing the theoretical gains of RevNets can be a non-trivial task and require precise low-level GPU memory management. We experimented with two different implementations within TensorFlow: With the first, we were able to reach reasonable spatial gains using “Tensor Handles” provided by TensorFlow, which preserve the activations of graph nodes between calls to session.run. Multiple session.run calls ensures that TensorFlow frees up activations that will not be referenced later. We segment our computation graph into separate sections and save the bordering activations and gradients into the persistent Tensor Handles. During the forward pass of the backpropagation algorithm, each section of the graph is executed sequentially with the input tensors being reloaded from the previous section and the output tensors being saved for use in the subsequent section. We empirically verified the memory gain by fitting at least twice the number of examples while training ImageNet. Each GPU can now fit a mini-batch size of 128 images, compared the original ResNet, which can only fit a mini-batch size of 32. The graph splitting trick brings only a small computational overhead (around 10%). The second and most significant spatial gains were made by implementing each residual stack as a tf.while_loop with the back_prop parameter set to False. This setting ensures that activations of each layer in the residual stack (aside from the last) are discarded from memory immediately after their utility expires. We use the tf.while_loops for both the forward and backward passes of the layers, ensuring both efficiently discard activations. 
Using this implementation we were able to train a 600-layer RevNet on the ImageNet image classification challenge on a single GPU; despite being prohibitively slow to train this demonstrates the potential for massive savings in spatial costs of training extremely deep networks.
1. What is the focus of the paper being reviewed? 2. What are the strengths of the proposed approach, particularly regarding storage requirements? 3. What are the weaknesses or limitations of the method, such as its applicability to invertible layers only? 4. Are there any recent works that should be compared or discussed in relation to the proposed approach, like decoupled neural interfaces (DNI)? 5. How does the reviewer assess the overall interest and impact of the paper on practitioners in the field? 6. Are there any minor comments or suggestions regarding the experimental section or the presentation of results?
Review
Review The authors introduce “RevNets”, which avoid storing (some) activations by utilizing computational blocks that are trivial to invert (i.e. y1=x1+f(x2), y2=x2 + g(y1) ). Revnets match the performance of ResNets with the same number of parameters, and in practice RevNets appear to save ~4X in storage at the cost of a ~2X increase in computation. Interestingly, the reversible blocks are also volume preserving, which is not explicitly discussed, but should be, because this is a potential limitation. The approach of reconstructing activations rather than storing them is only applicable to invertible layers, and so while requiring only O(1) storage for invertible layers, succeeds in only a 4X gain in storage requirements (which is nevertheless impressive). One concern I have is that the recent work on decoupled neural interfaces (DNI) is not adequately discussed or compared to (DNI also requires O(1) storage, and estimates error signals [and optionally input values] analogously to how value functions are learned in reinforcement learning). While DNI is not lossless as the authors mention, preliminary indications are that conditional DNI (cDNI, e.g. on labels) is quite effective https://arxiv.org/pdf/1608.05343.pdf, figure 7. DNI has other advantages as well, but indeed is not fully evolved. Nevertheless I think that DNI should be included in table 1, and discussed more thoroughly (If cDNI is highly effective on large scale tasks it would subsume Revnets). Overall, I believe that this paper will be of interest to practitioners in the field, and the idea, while straightforward, is interesting. Minor comments: - Checkpointing is straightforward and general, and may require less overhead, and so should probably be directly compared to at least under the same constraints (store corr. non-invertible layers). More generally, the experiments section should be more explicit wrt the realized memory/compute trade-off. “We empirically verified the memory gain by fitting at least twice the number of examples while training ImageNet” - This confused me---is the gain not 4X? --- Authors: Thank you for your feedback. I've updated my recommendation since 1) DNI is not yet officially published, and 2) the practical memory advantages of RevNets have/will be made clear in the final version of the paper. Good luck!
NIPS
Title The Reversible Residual Network: Backpropagation Without Storing Activations Abstract Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth. 1 Introduction Over the last five years, deep convolutional neural networks have enabled rapid performance improvements across a wide range of visual processing tasks [19, 26, 20]. For the most part, the state-of-the-art networks have been growing deeper. For instance, deep residual networks (ResNets) [13] are the state-of-the-art architecture across multiple computer vision tasks [19, 26, 20]. The key architectural innovation behind ResNets was the residual block, which allows information to be passed directly through, making the backpropagated error signals less prone to exploding or vanishing. This made it possible to train networks with hundreds of layers, and this vastly increased depth led to significant performance gains. Nearly all modern neural networks are trained using backpropagation. Since backpropagation requires storing the network’s activations in memory, the memory cost is proportional to the number of units in the network. Unfortunately, this means that as networks grow wider and deeper, storing the activations imposes an increasing memory burden, which has become a bottleneck for many applications [34, 37]. Graphics processing units (GPUs) have limited memory capacity, leading to constraints often exceeded by state-of-the-art architectures, some of which reach over one thousand layers [13]. Training large networks may require parallelization across multiple GPUs [7, 28], which is both expensive and complicated to implement. Due to memory constraints, modern architectures are often trained with a mini-batch size of 1 (e.g. [34, 37]), which is inefficient for stochastic gradient methods [11]. Reducing the memory cost of storing activations would significantly improve our ability to efficiently train wider and deeper networks. ∗These authors contributed equally. Code available at https://github.com/renmengye/revnet-public 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. We present Reversible Residual Networks (RevNets), a variant of ResNets which is reversible in the sense that each layer’s activations can be computed from the subsequent reversible layer’s activations. This enables us to perform backpropagation without storing the activations in memory, with the exception of a handful of non-reversible layers. The result is a network architecture whose activation storage requirements are independent of depth, and typically at least an order of magnitude smaller compared with equally sized ResNets. 
Surprisingly, constraining the architecture to be reversible incurs no noticeable loss in performance: in our experiments, RevNets achieved nearly identical classification accuracy to standard ResNets on CIFAR-10, CIFAR-100, and ImageNet, with only a modest increase in the training time. 2 Background 2.1 Backpropagation Backpropagation [25] is a classic algorithm for computing the gradient of a cost function with respect to the parameters of a neural network. It is used in nearly all neural network algorithms, and is now taken for granted in light of neural network frameworks which implement automatic differentiation [1, 2]. Because achieving the memory savings of our method requires manual implementation of part of the backprop computations, we briefly review the algorithm. We treat backprop as an instance of reverse mode automatic differentiation [24]. Let v1, . . . , vK denote a topological ordering of the nodes in the network’s computation graph G, where vK denotes the cost function C. Each node is defined as a function fi of its parents in G. Backprop computes the total derivative dC/dvi for each node in the computation graph. This total derivative defines the the effect on C of an infinitesimal change to vi, taking into account the indirect effects through the descendants of vk in the computation graph. Note that the total derivative is distinct from the partial derivative ∂f/∂xi of a function f with respect to one of its arguments xi, which does not take into account the effect of changes to xi on the other arguments. To avoid using a small typographical difference to represent a significant conceptual difference, we will denote total derivatives using vi = dC/dvi. Backprop iterates over the nodes in the computation graph in reverse topological order. For each node vi, it computes the total derivative vi using the following rule: vi = ∑ j∈Child(i) ( ∂fj ∂vi )> vj , (1) where Child(i) denotes the children of node vi in G and ∂fj/∂vi denotes the Jacobian matrix. 2.2 Deep Residual Networks One of the main difficulties in training very deep networks is the problem of exploding and vanishing gradients, first observed in the context of recurrent neural networks [3]. In particular, because a deep network is a composition of many nonlinear functions, the dependencies across distant layers can be highly complex, making the gradient computations unstable. Highway networks [29] circumvented this problem by introducing skip connections. Similarly, deep residual networks (ResNets) [13] use a functional form which allows information to pass directly through the network, thereby keeping the computations stable. ResNets currently represent the state-of-the-art in object recognition [13], semantic segmentation [35] and image generation [32]. Outside of vision, residuals have displayed impressive performance in audio generation [31] and neural machine translation [16], ResNets are built out of modules called residual blocks, which have the following form: y = x+ F(x), (2) where F , a function called the residual function, is typically a shallow neural net. ResNets are robust to exploding and vanishing gradients because each residual block is able to pass signals directly through, allowing the signals to be propagated faithfully across many layers. As displayed in Figure 1, residual functions for image recognition generally consist of stacked batch normalization ("BN") [14], rectified linear activation ("ReLU") [23] and convolution layers (with filters of shape three "C3" and one "C1"). 
As in He et al. [13], we use two residual block architectures: the basic residual function (Figure 1 right-top) and the bottleneck residual function (Figure 1 right-bottom). The bottleneck residual consists of three convolutions: the first is a point-wise convolution which reduces the dimensionality of the feature dimension, the second is a standard convolution with filter size 3, and the final point-wise convolution projects into the desired output feature depth. Writing a(x) = ReLU(BN(x)) and c_k(x) = Conv_{k×k}(a(x)), the two blocks are Basic(x) = c_3(c_3(x)) and Bottleneck(x) = c_1(c_3(c_1(x))). (3) 2.3 Reversible Architectures Various reversible neural net architectures have been proposed, though for motivations distinct from our own. Deco and Brauer [8] develop a similar reversible architecture to ensure the preservation of information in unsupervised learning contexts. The proposed architecture is indeed residual and constructed to produce a lower triangular Jacobian matrix with ones along the diagonal. In Deco and Brauer [8], the residual connections are composed of all ‘prior’ neurons in the layer, while NICE and our own architecture segment a layer into pairs of neurons and additively connect one with a residual function of the other. Maclaurin et al. [21] made use of the reversible nature of stochastic gradient descent to tune hyperparameters via gradient descent. Our proposed method is inspired by nonlinear independent components estimation (NICE) [9, 10], an approach to unsupervised generative modeling. NICE is based on learning a non-linear bijective transformation between the data space and a latent space. The architecture is composed of a series of blocks defined as follows, where x1 and x2 are a partition of the units in each layer: y1 = x1, y2 = x2 + F(x1). (4) Because the model is invertible and its Jacobian has unit determinant, the log-likelihood and its gradients can be tractably computed. This architecture imposes some constraints on the functions the network can represent; for instance, it can only represent volume-preserving mappings. Follow-up work by Dinh et al. [10] addressed this limitation by introducing a new reversible transformation: y1 = x1, y2 = x2 ⊙ exp(F(x1)) + G(x1). (5) Here, ⊙ represents the Hadamard or element-wise product. This transformation has a non-unit Jacobian determinant due to the multiplication by exp(F(x1)). 3 Methods We now introduce Reversible Residual Networks (RevNets), a variant of Residual Networks which is reversible in the sense that each layer’s activations can be computed from the next layer’s activations. We discuss how to reconstruct the activations online during backprop, eliminating the need to store the activations in memory. 3.1 Reversible Residual Networks RevNets are composed of a series of reversible blocks, which we now define.
We must partition the units in each layer into two groups, denoted x1 and x2; for the remainder of the paper, we assume this is done by partitioning the channels, since we found this to work the best in our experiments. (Footnote 2: The possibilities we explored included columns, checkerboard, rows and channels, as done by [10]. We found that performance was consistently superior using the channel-wise partitioning scheme and comparable across the remaining options. We note that channel-wise partitioning has also been explored in the context of multi-GPU training via 'grouped' convolutions [18], and more recently, convolutional neural networks have seen significant success by way of 'separable' convolutions [27, 6].) Each reversible block takes inputs (x1, x2) and produces outputs (y1, y2) according to the following additive coupling rules – inspired by NICE’s [9] transformation in Equation 4 – and residual functions F and G analogous to those in standard ResNets: y1 = x1 + F(x2), y2 = x2 + G(y1). (6) Each layer’s activations can be reconstructed from the next layer’s activations as follows: x2 = y2 − G(y1), x1 = y1 − F(x2). (7) Note that unlike residual blocks, reversible blocks must have a stride of 1 because otherwise the layer discards information, and therefore cannot be reversible. Standard ResNet architectures typically have a handful of layers with a larger stride. If we define a RevNet architecture analogously, the activations must be stored explicitly for all non-reversible layers. 3.2 Backpropagation Without Storing Activations To derive the backprop procedure, it is helpful to rewrite the forward (left) and reverse (right) computations in the following way: forward: z1 = x1 + F(x2), y2 = x2 + G(z1), y1 = z1; reverse: z1 = y1, x2 = y2 − G(z1), x1 = z1 − F(x2). (8) Even though z1 = y1, the two variables represent distinct nodes of the computation graph, so the total derivatives z̄1 and ȳ1 are different. In particular, z̄1 includes the indirect effect through y2, while ȳ1 does not. This splitting lets us implement the forward and backward passes for reversible blocks in a modular fashion. In the backward pass, we are given the activations (y1, y2) and their total derivatives (ȳ1, ȳ2) and wish to compute the inputs (x1, x2), their total derivatives (x̄1, x̄2), and the total derivatives for any parameters associated with F and G. (See Section 2.1 for our backprop notation.) We do this by combining the reconstruction formulas (Eqn. 8) with the backprop rule (Eqn. 1). The resulting algorithm is given as Algorithm 1. By applying Algorithm 1 repeatedly, one can perform backprop on a sequence of reversible blocks if one is given simply the activations and their derivatives for the top layer in the sequence. In general, a practical architecture would likely also include non-reversible layers, such as subsampling layers; the inputs to these layers would need to be stored explicitly during backprop. However, a typical ResNet architecture involves long sequences of residual blocks and only a handful of subsampling layers; if we mirror the architecture of a ResNet, there would be only a handful of non-reversible layers, and the number would not grow with the depth of the network. In this case, the storage cost of the activations would be small, and independent of the depth of the network.
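As a sanity check on the coupling and reconstruction formulas in Equations (6)-(8), the following minimal NumPy sketch builds a reversible block with arbitrary residual functions and verifies that the inverse recovers the inputs exactly; the small random MLPs standing in for F and G, the group size d, and the seeds are illustrative assumptions, not the BN-ReLU-conv stacks of Figure 1.

import numpy as np

rng = np.random.default_rng(0)
d = 8  # channels per group (illustrative)

def make_residual(seed):
    r = np.random.default_rng(seed)
    W1, W2 = r.normal(size=(d, d)), r.normal(size=(d, d))
    return lambda x: W2 @ np.maximum(W1 @ x, 0.0)   # a toy two-layer residual function

F, G = make_residual(1), make_residual(2)

def forward(x1, x2):            # Equation (6)
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):            # Equation (7)
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.normal(size=d), rng.normal(size=d)
y1, y2 = forward(x1, x2)
x1r, x2r = inverse(y1, y2)
assert np.allclose(x1, x1r) and np.allclose(x2, x2r)   # exact up to floating-point rounding

Any choice of F and G yields an invertible block, which is why the residual functions themselves never need to be invertible.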
Computational overhead. In general, for a network with N connections, the forward and backward passes of backprop require approximately N and 2N add-multiply operations, respectively. For a RevNet, the residual functions each must be recomputed during the backward pass. Therefore, the number of operations required for reversible backprop is approximately 4N, or roughly 33% more than ordinary backprop. (This is the same as the overhead introduced by checkpointing [22].) In practice, we have found the forward and backward passes to be about equally expensive on GPU architectures; if this is the case, then the computational overhead of RevNets is closer to 50%.
Algorithm 1 Reversible Residual Block Backprop
1: function BLOCKREVERSE((y1, y2), (ȳ1, ȳ2))
2:   z1 ← y1
3:   x2 ← y2 − G(z1)
4:   x1 ← z1 − F(x2)
5:   z̄1 ← ȳ1 + (∂G/∂z1)ᵀ ȳ2   ▷ ordinary backprop
6:   x̄2 ← ȳ2 + (∂F/∂x2)ᵀ z̄1   ▷ ordinary backprop
7:   x̄1 ← z̄1
8:   w̄F ← (∂F/∂wF)ᵀ z̄1   ▷ ordinary backprop
9:   w̄G ← (∂G/∂wG)ᵀ ȳ2   ▷ ordinary backprop
10:  return (x1, x2) and (x̄1, x̄2) and (w̄F, w̄G)
11: end function
(Footnote 3: We assume for notational clarity that the residual functions do not share parameters, but Algorithm 1 can be trivially extended to a network with weight sharing, such as a recurrent neural net.)
Modularity. Note that Algorithm 1 is agnostic to the form of the residual functions F and G. The steps which use the Jacobians of these functions are implemented in terms of ordinary backprop, which can be achieved by calling automatic differentiation routines (e.g. tf.gradients or Theano.grad). Therefore, even though implementing our algorithm requires some amount of manual implementation of backprop, one does not need to modify the implementation in order to change the residual functions. Numerical error. While Eqn. 8 reconstructs the activations exactly when done in exact arithmetic, practical float32 implementations may accumulate numerical error during backprop. We study the effect of numerical error in Section 5.2; while the error is noticeable in our experiments, it does not significantly affect final performance. We note that if numerical error becomes a significant issue, one could use fixed-point arithmetic on the x’s and y’s (but ordinary floating point to compute F and G), analogously to [21]. In principle, this would enable exact reconstruction while introducing little overhead, since the computation of the residual functions and their derivatives (which dominate the computational cost) would be unchanged. 4 Related Work A number of steps have been taken towards reducing the storage requirements of extremely deep neural networks. Much of this work has focused on the modification of memory allocation within the training algorithms themselves [1, 2]. Checkpointing [22, 5, 12] is one well-known technique which trades off spatial and temporal complexity; during backprop, one stores a subset of the activations (called checkpoints) and recomputes the remaining activations as required. Martens and Sutskever [22] adopted this technique in the context of training recurrent neural networks on a sequence of length T using backpropagation through time [33], storing every ⌈√T⌉ layers and recomputing the intermediate activations between each during the backward pass. Chen et al. [5] later proposed to recursively apply this strategy on the sub-graph between checkpoints. Gruslys et al. [12] extended this approach by applying dynamic programming to determine a storage strategy which minimizes the computational cost for a given memory budget.
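Before comparing complexities, here is a small numerical check of Algorithm 1: with linear residual functions F(x) = WF x and G(x) = WG x, the Jacobians in lines 5-9 are simply WF and WG, and the returned total derivatives can be compared against finite differences. The shapes, seeds, and the linear choice of F and G are illustrative assumptions, not the architectures used in the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 5
WF, WG = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def forward(x1, x2):                       # Equation (8), forward computations
    z1 = x1 + WF @ x2
    return z1, x2 + WG @ z1                # (y1, y2)

def block_reverse(y1, y2, y1b, y2b):       # Algorithm 1 with linear F and G
    z1 = y1
    x2 = y2 - WG @ z1
    x1 = z1 - WF @ x2
    z1b = y1b + WG.T @ y2b                 # line 5: ordinary backprop through G
    x2b = y2b + WF.T @ z1b                 # line 6: ordinary backprop through F
    x1b = z1b                              # line 7
    wFb = np.outer(z1b, x2)                # line 8: total derivative w.r.t. WF
    wGb = np.outer(y2b, z1)                # line 9: total derivative w.r.t. WG
    return (x1, x2), (x1b, x2b), (wFb, wGb)

# Check: for the linear cost C = y1b.y1 + y2b.y2 (fixed y1b, y2b), a finite
# difference on x1 agrees with the x1b returned by BLOCKREVERSE.
x1, x2 = rng.normal(size=d), rng.normal(size=d)
y1, y2 = forward(x1, x2)
y1b, y2b = rng.normal(size=d), rng.normal(size=d)
(_, _), (x1b, _), _ = block_reverse(y1, y2, y1b, y2b)
eps = 1e-6
e0 = np.zeros(d)
e0[0] = 1.0
ya, yb = forward(x1 + eps * e0, x2)
num = (y1b @ ya + y2b @ yb - (y1b @ y1 + y2b @ y2)) / eps
assert abs(num - x1b[0]) < 1e-4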
To analyze the computational and memory complexity of these alternatives, assume for simplicity a feed-forward network consisting of L identical layers. Again, for simplicity, assume the units are chosen such that the cost of forward propagation or backpropagation through a single layer is 1, and the memory cost of storing a single layer’s activations is 1. In this case, ordinary backpropagation has computational cost 2L and storage cost L for the activations. The method of Martens and Sutskever [22] requires 2√L storage, and it demands an additional forward computation for each layer, leading to a total computational cost of 3L. The recursive algorithm of Chen et al. [5] reduces the required memory to O(log L), while increasing the computational cost to O(L log L). In comparison to these, our method incurs O(1) storage cost — as only a single block must be stored — and computational cost of 3L. The time and space complexities of these methods are summarized in Table 1. Another approach to saving memory is to replace backprop itself. The decoupled neural interface [15] updates each weight matrix using a gradient approximation, termed the synthetic gradient, computed based on only the node’s activations instead of the global network error. This removes any long-range gradient computation dependencies in the computation graph, leading to O(1) activation storage requirements. However, these savings are achieved only after the synthetic gradient estimators have been trained; that training requires all the activations to be stored. 5 Experiments We experimented with RevNets on three standard image classification benchmarks: CIFAR-10, CIFAR-100 [17], and ImageNet [26]. In order to make our results directly comparable with standard ResNets, we tried to match both the computational depth and the number of parameters as closely as possible. We observed that each reversible block has a computation depth of two original residual blocks. Therefore, we reduced the total number of residual blocks by approximately half, while approximately doubling the number of channels per block, since they are partitioned into two. Table 2 shows the details of the RevNets and their corresponding traditional ResNets. In all of our experiments, we were interested in whether our RevNet architectures (which are far more memory efficient) were able to match the classification accuracy of ResNets of the same size. 5.1 Implementation We implemented the RevNets using the TensorFlow library [1]. We manually make calls to TensorFlow’s automatic differentiation method (i.e. tf.gradients) to construct the backward-pass computation graph without referencing activations computed in the forward pass. First, while building the backward graph, we reconstruct the input activations (x̂1, x̂2) for each block (Equation 8); second, we apply tf.stop_gradient on the reconstructed inputs to prevent auto-diff from traversing into the reconstructions’ computation graph, then call the forward functions again to compute (ŷ1, ŷ2) (Equation 8). Lastly, we use auto-diff to traverse from (ŷ1, ŷ2) to (x̂1, x̂2) and the parameters (wF, wG). This implementation leverages the convenience of the auto-diff functionality to avoid manually deriving gradients; however, the computational cost becomes 5N, compared with 4N for Algorithm 1, and 3N for ordinary backpropagation (see Section 3.2). The full theoretical efficiency can be realized by reusing the F and G graphs’ activations that were computed in the reconstruction steps (lines 3 and 4 of Algorithm 1).
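The same recipe can be written compactly in a modern autodiff framework. The following is a PyTorch analogue of the TensorFlow procedure just described — a hedged sketch, not the authors' code — with small tanh layers standing in for F and G and all names and sizes chosen for illustration only.

import torch

torch.manual_seed(0)
d = 8
WF = torch.randn(d, d, requires_grad=True)   # parameters of an illustrative F
WG = torch.randn(d, d, requires_grad=True)   # parameters of an illustrative G
F = lambda x: torch.tanh(x @ WF)
G = lambda x: torch.tanh(x @ WG)

def block_reverse(y1, y2, y1_bar, y2_bar):
    with torch.no_grad():                    # Equation (8), reverse computations
        x2 = y2 - G(y1)
        x1 = y1 - F(x2)
    # Detach the reconstructions so autodiff does not traverse into them,
    # then redo the forward pass and let autograd produce the gradients.
    x1, x2 = x1.detach().requires_grad_(), x2.detach().requires_grad_()
    z1 = x1 + F(x2)                          # Equation (8), forward computations
    y2_re = x2 + G(z1)
    y1_re = z1
    x1_bar, x2_bar, wF_bar, wG_bar = torch.autograd.grad(
        outputs=(y1_re, y2_re), inputs=(x1, x2, WF, WG),
        grad_outputs=(y1_bar, y2_bar))
    return (x1, x2), (x1_bar, x2_bar), (wF_bar, wG_bar)

# Tiny usage check: reconstruct the inputs of one block from its outputs.
x1_true, x2_true = torch.randn(d), torch.randn(d)
with torch.no_grad():
    z1 = x1_true + F(x2_true)
    y1, y2 = z1, x2_true + G(z1)
(x1_rec, x2_rec), _, _ = block_reverse(y1, y2, torch.ones(d), torch.ones(d))
assert torch.allclose(x1_rec, x1_true, atol=1e-5) and torch.allclose(x2_rec, x2_true, atol=1e-5)

As in the TensorFlow version, this convenience costs roughly 5N operations unless the activations computed during reconstruction are reused.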
5.2 RevNet performance Our ResNet implementation roughly matches the previously reported classification error rates [13]. As shown in Table 3, our RevNets roughly matched the error rates of traditional ResNets (of roughly equal computational depth and number of parameters) on CIFAR-10 & 100 as well as ImageNet (Table 4). In no condition did the RevNet underperform the ResNet by more than 0.5%, and in some cases, RevNets achieved slightly better performance. Furthermore, Figure 3 compares ImageNet training curves of the ResNet and RevNet architectures; reversibility did not lead to any noticeable per-iteration slowdown in training. (As discussed above, each RevNet update is about 1.5-2× more expensive, depending on the implementation.) We found it surprising that the performance matched so closely, because reversibility would appear to be a significant constraint on the architecture, and one might expect large memory savings to come at the expense of classification error. Impact of numerical error. As described in Section 3.2, reconstructing the activations over many layers causes numerical errors to accumulate. In order to measure the magnitude of this effect, we computed the angle between the gradients computed using stored and reconstructed activations over the course of training. Figure 4 shows how this angle evolved over the course of training for a CIFAR-10 RevNet; while the angle increased during training, it remained small in magnitude. Figure 3: Training curves for ResNet-101 vs. RevNet-104 on ImageNet, with both networks having approximately the same depth and number of free parameters. Left: training cross entropy; Right: classification error, where dotted lines indicate training, and solid lines validation. [Figure 4 panels, RevNet-164 on CIFAR-10: gradient-error angle (degrees) vs. epochs; training loss vs. epochs for stored vs. reconstructed activations; top-1 classification error vs. epochs for stored vs. reconstructed activations.] Figure 4: Left: angle (degrees) between the gradient computed using stored and reconstructed activations throughout training. While the angle grows during training, it remains small in magnitude. We measured 4 more epochs after regular training length and did not observe any instability. Middle: training cross entropy; Right: classification error, where dotted lines indicate training, and solid lines validation; no meaningful difference in training efficiency or final performance was observed between stored and reconstructed activations. Figure 4 also shows training curves for CIFAR-10 networks trained using both methods of computing gradients. Despite the numerical error from reconstructing activations, both methods performed almost indistinguishably in terms of the training efficiency and the final performance.
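For reference, the angle reported in Figure 4 can be computed with a few lines of NumPy; the helper below and its toy inputs are purely illustrative, not the exact evaluation code used for the figure.

import numpy as np

def gradient_angle_degrees(g_stored, g_recon):
    # Flatten the per-parameter gradients into single vectors and compare them.
    a = np.concatenate([g.ravel() for g in g_stored])
    b = np.concatenate([g.ravel() for g in g_recon])
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

g1 = [np.array([1.0, 2.0]), np.array([[0.5, -1.0]])]        # gradients with stored activations
g2 = [np.array([1.0, 2.0001]), np.array([[0.5001, -1.0]])]  # gradients with reconstructed activations
print(gradient_angle_degrees(g1, g2))                        # a small angle, as in Figure 4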
6 Conclusion and Future Work We introduced RevNets, a neural network architecture where the activations for most layers need not be stored in memory. We found that RevNets provide considerable reduction in the memory footprint at little or no cost to performance. As future work, we are currently working on applying RevNets to the task of semantic segmentation, the performance of which is limited by a critical memory bottleneck — the input image patch needs to be large enough to process high resolution images; meanwhile, the batch size also needs to be large enough to perform effective batch normalization (e.g. [36]). We also intend to develop reversible recurrent neural net architectures; this is a particularly interesting use case, because weight sharing implies that most of the memory cost is due to storing the activations (rather than parameters). Another interesting direction is predicting previous layers’ activations, similar to synthetic gradients. We envision our reversible block as a module which will soon enable training larger and more powerful networks with limited computational resources. 7 Appendix 7.1 Experiment details For our CIFAR-10/100 experiments, we fixed the mini-batch size to be 100. The learning rate was initialized to 0.1 and decayed by a factor of 10 at 40K and 60K training steps, training for a total of 80K steps. The weight decay constant was set to 2 × 10^-4 and the momentum was set to 0.9. We subtracted the mean image, and augmented the dataset with random cropping and random horizontal flipping. For our ImageNet experiments, we fixed the mini-batch size to be 256, split across 4 Titan X GPUs with data parallelism [28]. We employed synchronous SGD [4] with momentum of 0.9. The model was trained for 600K steps, with factor-of-10 learning rate decays scheduled at 160K, 320K, and 480K steps. Weight decay was set to 1 × 10^-4. We applied standard input preprocessing and data augmentation used in training Inception networks [30]: pixel intensity rescaled to within [0, 1], random cropping of size 224 × 224 around object bounding boxes, random scaling, random horizontal flipping, and color distortion, all of which are available in TensorFlow. For the original ResNet-101, we were unable to fit a mini-batch size of 256 on 4 GPUs, so we instead averaged the gradients from two serial runs with mini-batch size 128 (32 per GPU). For the RevNet, we were able to fit a mini-batch size of 256 on 4 GPUs (i.e. 64 per GPU). 7.2 Memory savings Fully realizing the theoretical gains of RevNets can be a non-trivial task and requires precise low-level GPU memory management. We experimented with two different implementations within TensorFlow. With the first, we were able to reach reasonable spatial gains using “Tensor Handles” provided by TensorFlow, which preserve the activations of graph nodes between calls to session.run. Multiple session.run calls ensure that TensorFlow frees up activations that will not be referenced later. We segment our computation graph into separate sections and save the bordering activations and gradients into the persistent Tensor Handles. During the forward pass of the backpropagation algorithm, each section of the graph is executed sequentially with the input tensors being reloaded from the previous section and the output tensors being saved for use in the subsequent section. We empirically verified the memory gain by fitting at least twice the number of examples while training ImageNet. Each GPU can now fit a mini-batch size of 128 images, compared with the original ResNet, which can only fit a mini-batch size of 32. The graph splitting trick brings only a small computational overhead (around 10%). The second and most significant spatial gains were made by implementing each residual stack as a tf.while_loop with the back_prop parameter set to False. This setting ensures that activations of each layer in the residual stack (aside from the last) are discarded from memory immediately after their utility expires. We use tf.while_loop for both the forward and backward passes of the layers, ensuring both efficiently discard activations.
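A schematic TensorFlow 1.x sketch of this second trick is given below; it is an illustrative reconstruction rather than the authors' code, and layer_fn and n_layers are placeholder names.

import tensorflow as tf  # TensorFlow 1.x style, as used in the paper

def run_stack_no_backprop(x, n_layers, layer_fn):
    # Run a residual stack inside tf.while_loop with back_prop=False so that
    # intermediate activations are freed as soon as they are no longer needed;
    # gradients are later obtained through the reconstruction procedure above.
    i0 = tf.constant(0)
    def cond(i, h):
        return i < n_layers
    def body(i, h):
        return i + 1, layer_fn(h)
    _, out = tf.while_loop(cond, body, loop_vars=[i0, x], back_prop=False)
    return out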
Using this tf.while_loop implementation, we were able to train a 600-layer RevNet on the ImageNet image classification challenge on a single GPU; while prohibitively slow to train, this demonstrates the potential for massive savings in the spatial costs of training extremely deep networks.
1. What is the main contribution of the paper regarding residual networks? 2. What are the strengths of the paper, particularly in terms of its experiments and potential impact on training large models? 3. What are the weaknesses of the paper, such as the need for more experiments and discussion on the potential drawbacks of the proposed approach? 4. How does the reviewer assess the clarity and quality of the paper's writing? 5. Are there any minor issues or typos in the paper that can be easily fixed?
Review
Review The paper introduces a reversible block for residual networks that has the benefit of not needing to store all of the forward pass activations for the backward pass. This enables training of larger models or minibatches, as the memory footprint during training is smaller. As the memory of GPUs is often a bottleneck when training very large models, the paper is a welcome addition to the line of literature on decreasing this footprint. By training on both CIFAR and ImageNet, the experiments focus on two widely used benchmarks. The experiments seem solid and achieve compelling results. The authors also confirm empirically that numerical instabilities are not a significant problem, which is good. To make the paper even stronger, I would have been interested in seeing even more experiments. It is good that the model is tested both on CIFAR (10 and 100) and ImageNet, but for example SVHN would have been a fairly "cheap" addition to the set of standard benchmarks. The performance on RNNs, that the authors also discuss, would be very interesting to see. In addition, the main concern I have with using the model in practice is the potential drawback of not discarding information (hence the reversibility) - I'm not sure exactly what form that discussion would take, but a slightly deeper discussion on that issue would be helpful. The second paper on NICE might give some ideas for how to do that. A minor issue is also that it is somewhat unclear to the reader how large a typical memory footprint of the activations is compared to the footprint of the weights for a particular minibatch size in the resnets. This obviously depends on many factors, but it would be really helpful (for instance in the appendix) to make some back-of-the-envelope calculations that would give an indication of e.g. how much deeper models or how much larger minibatches one could train, given an optimized code, for typical model sizes. The paper is generally well written, although some minor typos are still present (p1: "simply by widening (increasing the convolutions' filter count) fewer layer" should probably read "...layers", p2: "...derivative defines the the effect..." has one "the" too much, p2: "change to v_i, taking into account the indirect effects through the descendants of v_k" should probably read "... descendants of v_i") which will be easy to fix by carefully rereading the paper a few times. The drawbacks are however fairly minor, and the strength of the paper is that the idea is fairly simple, has been shown with sufficient empirical results to work, and is well presented. In my view, this is an idea that would interest the NIPS audience.
NIPS
Title Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm Abstract Motivated by sequential budgeted allocation problems, we investigate online matching problems where connections between vertices are not i.i.d., but they have fixed degree distributions – the so-called configuration model. We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant stochastic discrete processes by their continuous counterparts, which are solutions of an explicit system of partial differential equations. This technique gives precise bounds on the estimation errors, with arbitrarily high probability as the problem size increases. In particular, it allows the formal comparison between different configuration models. We also prove that, quite surprisingly, GREEDYcan have better performance guarantees than RANKING, another celebrated algorithm for online matching that usually outperforms the former. 1 Introduction Finding matchings in bipartite graphs (U[V , E), where E ⇢ U⇥V is a set of edges, is a long-standing problem with different motivations and approaches [Godsil, 1981, Zdeborová and Mézard, 2006, Lovász and Plummer, 2009, Bordenave et al., 2013]. If U is seen as a set of resources and V as demands, the objective is to allocate as many resources to demands (an allocation - or a matching - between u and v is admissible if (u, v) 2 E) with the constraint that a resource is allocated to only one demand and vice-versa. Motivated particularly by practical applications to Internet advertising, the online variant of this problem is receiving increasing attention (we refer to the excellent survey [Mehta, 2012] for more applications, specific settings, results and techniques). In this case, the set of vertices U is present at the beginning and the graph unveils sequentially: vertices v 2 V are observed sequentially, one after the other, along with the edges they belong to. An online algorithm must decide, right after observing vk and its associated set of edges Ek := {(u, vk) 2 E} to match it to some other vertex u 2 U , at the conditions that (u, vk) 2 Ek and u 2 U has not been matched yet. The performance of an online algorithm is evaluated by its competitive ratio, which is the ratio between the size of the matching it has created and the highest possible matching in hindsight [Feldman et al., 2009]. This theoretical setting is particularly well suited for online advertising: U is the set of campaigns/ads that an advertiser can run and users v1, v2, . . . , vT arrive sequentially [Mehta, 2012, Manshadi et al., 2012]. Some of them are eligible for a large subset of campaigns, others are not (usually based 35th Conference on Neural Information Processing Systems (NeurIPS 2021). on their attributes/features, such as the geographic localization, the browsing history, or any other relevant information). The objective of an advertiser (in this over-simplified model) is to maximize the number of displayed ads. In practice, campaigns/ads are not displayed only once but have a maximal budget of impressions (say, a specific ad can be displayed only 10.000 times each day). A possible trick consists of duplicating the vertices of U as many times as the budget. However, this results in strong and undesirable correlations between vertices. 
It is, therefore, more appropriate to consider a bipartite graph with capacities and admissible matchings as subsets of edges such that each vertex belongs to several different edges, but not more than their associated capacities ! 2 N (a vertex v 2 V is matched once while u 2 U can be matched !u times). This online matching problem with capacities has been quite extensively studied. It is known that GREEDY, which matches all incoming vertices to any available neighbor has a competitive ratio of 1/2 in the worst case, albeit it achieves 1 1/e as soon as the incoming vertices arrive in Random Order [Goel and Mehta, 2008b]. The worst-case optimal algorithm is the celebrated RANKING, which achieves 1 1/e on any instance [Karp et al., 1990, Devanur et al., 2013, Birnbaum and Mathieu, 2008], and also has better guarantees in the Random Order setting [Mahdian and Yan, 2011]. Beyond the adversarial setting, the following stochastic setting has been considered: there exist a finite set of L “base” vertices v(1), . . . , v(L) associated to base edge-sets E(1), . . . , E(L). When a vertex vk arrives, its type ✓k 2 {1, . . . , L} is drawn iid from some distribution (either known beforehand or not) and then its edge set is set as Ek = E(✓k). In the context where the distribution is known, algorithms with much better competitive ratios than GREEDY or RANKING were designed [Manshadi et al., 2012, Jaillet and Lu, 2014, Brubach et al., 2019], specifically with a competitive ratio of 1 2/e2 when the expected number of arrival of each type are integral and 0.706 without this assumption. Notably, those competitive ratios still hold with Poisson arrival rates rather than a fixed number of arrivals. On a side note, a vast line of work considers online matching in weighted graphs [Devanur et al., 2012, Goel and Mehta, 2008a, Mehta, 2012], which is outside the scope of this paper. However, it is still worth noting that the unweighted graph is a weighted graph with all weights equal. This model of the stochastic setting is quite interesting but rather strong: it lacks flexibility and cannot be used to represent some challenging instances ( for example when the degrees of each vertex U increase linearly with the number of vertices in V , or when the set U of campaigns must be fixed so that the model is well specified, etc...). Another tentative is to consider Erdős-Rényi graphs assuming that each possible edge is present in U ⇥V with some fixed probability and independently of the other edges (see [Mastin and Jaillet, 2013]). The most interesting and challenging setting corresponds to the so-called sparse regime where each vertex of U has an expected degree independent of the size n of V , which amounts to take a probability of connection equal to c/n. Interestingly enough, even the analysis of the simplest GREEDY algorithm is quite challenging and already insightful in those models [Borodin et al., 2018, Arnosti, 2019, Dyer et al., 1993, Mastin and Jaillet, 2013]. Unfortunately, although this Erdős-Rényi model is compatible with growing sets U and V , it also turns out to be quite restrictive. The main reason is that the approximate Poisson degree distribution of the vertices has light-tail and does not allow for the appearance of the so-called scale-free property satisfied by many real-world networks [Barabási et al., 2000, Van Der Hofstad, 2016]. We, therefore, consider a more appropriate random graphs generation process called configuration model, introduced by [Bender and Canfield, 1978] and [Bollobás, 1980]. 
The optimal matching of this model has been computed in [Bordenave et al., 2013]. The configuration model is particularly well suited to handle different situations such as the following one. Assume that campaigns can either be “intensive” (with many eligible users) or “selective/light” (few eligible users), with an empirical proportion of, say, 20%/80%. Then whether an advertiser handles 100 campaigns at the same time or 10.000, it will always have roughly this proportion of intensive vs. light campaigns. Similarly, some users are more valuable than others, and are thus eligible for more campaigns than the others; the proportion of each type being independent of the total population size. The configuration model accommodates these observations by basically drawing iid degrees for vertices U and V (accordingly to some different unknown distributions for U and V) and then by finding a graph such that those degrees distribution are satisfied (up to negligible errors); as a consequence, the graphs generated are sparse, in the sense that the number of edges grows linearly with the number of vertices. Additionally, the configuration model is a well-suited random graph model which mimics a number of properties of real-world complex networks, while being analytically tractable. For instance, choosing power-law distributions for the degrees allows to obtain the so-called scale-free property (often observed in practice, as highlighted for the web by Faloutsos et al. [1999]). The configuration model also displays the so called “small-world phenomenon” (observed for instance in the graph of Facebook by Backstrom et al. [2012]) as its diameter is of logarithmic order. Main contribution We investigate the performances (in terms of expected competitive ratio) of the GREEDY matching algorithm in configuration models and we provide explicit quantitative results using stochastic approximation techniques [Wormald, 1995]; we prove that the increasing size of the random matching created is arbitrarily close to the solution of some explicit ODE. Solving the latter then gives in turn the solution to the original problem. The remaining of the paper is organized as follows. Section 2 describes precisely the problem and Theorem 1 is our first main result: it describes the performances of GREEDY in the capacity-less problem. The proof of Theorem 1 is delayed to Appendix D, but the main ideas and intuitions are provided in Section 3. The online matching with capacities problem is treated in Appendix A. 2 Online Matching Problems; Models and main result Consider a bipartite graph with capacities G = (U ,V, E ,!) where U = {1, . . . , N} and V = {1, . . . , T} are two finite set of vertices, E ⇢ (u, v), u 2 U , v 2 V is the set of edges and ! : U ! N⇤ is a capacity function. A matching M on G is a subset of edges e 2 E such that any vertex v 2 V is the endpoint of at most one edge e 2 M and any vertex u 2 U is the endpoint of at most !u edges in M . We will denote by M the set of matchings on G; the optimal matching M⇤ 2M is the one (or any one) with the highest cardinality, denoted by |M⇤|. The batched matching problem consists in finding any optimal matching M⇤ given a graph with capacities G; the online variant might be a bit more challenging, as the matching is constructed sequentially. Formally, the set of vertices U and their capacities ! are known from the start, and vertices v 2 V arrive sequentially (with the edges they belong to) and M0 = ;. 
At stage t 2 N – assuming a matching Mt 1 has been constructed –, a decision maker observes a new vertex1 vt and its associated set of edges {(u, vt);u 2 E}. If possible, one of these edges (ut, vt) is added to Mt 1, with the constraint that Mt = Mt 1 [ {(ut, vt)} is still a matching. The objective is to maximize the size of the constructed matching MT . The classical way to evaluate the performances of an algorithm is the competitive ratio, defined as |MT |/|M⇤| 2 [0, 1] (the higher the better). 2.1 Structured online matching via Configuration Model As mentioned before, the online matching problem can be quite difficult without additional structure. We will therefore assume that the vertex degrees in U and V have (at least asymptotically in N and T ) some given subGaussian2 distributions ⇡U and ⇡V , of respective expectation µU and µV and respective proxy-variance 2U and 2 V . Those numbers are related in the sense that we assume 3 that T = µUµV N 2 N. Given those degree distributions, the graphs we consider are random draws from a bipartite configuration model described below; for the sake of clarity, we first consider the capacity-less case (when !u = 1 for all u 2 U). Given ⇡U and ⇡V and N,T 1, let dU1 , . . . , dUN 2 N i.i.d. ⇠ ⇡U and dV1 , . . . , dVT 2 N i.i.d. ⇠ ⇡V be independent random variables; intuitively, those numbers are respectively the number of half-edges attached to vertex in U and V . Consider also two extra random variables dVT+1 = max NX i=1 dUi TX j=1 dVj , 0 and dUN+1 = max TX j=1 dVj NX i=1 dUi , 0 1Although the order of arrival is irrelevant to the models we studied, it could have an impact on other models. 2X is subGaussian with proxy-variance 2 if for any s 2 R,E[exp(sX)] exp ⇣ 2s2 2 ⌘ . Actually, we only need that ⇡U and ⇡V have some finite moment of order > 2. 3In the general case, consider T = bNµU/µVc. The proof is identical, up to a negligible 1/N error term so that equality between total degrees holds, i.e., PN+1 i=1 d U i = PT+1 j=1 d V j . Finally, a random (capacity- less) bipartite graph denoted by CM(dU ,dV ) is constructed with a uniform pairing of half-edges of U [ {N + 1} with half-edges of V [ {T + 1} and removing vertices T + 1 and N + 1 and their associated edges. These two artificially added vertices are just here to define a pairing between half-edges. Notice that, by the law of large numbers and since T = (µU/µV)N , dVT+1 = o(N) and dUN+1 = o(N) almost surely 4. The bipartite configuration model CM(dU ,dV) is then the random graph obtained by a uniform matching between the half-edges of U and the half-edges of V , where the random sequences dU = (dUi )i and dV = (dVj )j are defined as above. 2.2 Competitive ratio of GREEDY algorithm. Main result The first question to investigate in this structured setting is the computation of the (expected) competitive ratio of the simple algorithm GREEDY. It constructs a matching by sequentially adding any admissible edge uniformly at random. Describing it and stating our results require the following additional notations: for any e = (u, v) 2 E, u(e) = u (resp. v(e) = v) is the extremity of e in U (resp. V); the generating series of ⇡U and ⇡V are denoted by U and V and are defined as U (s) := X k 0 ⇡U (k)s k and V(s) := X k 0 ⇡V(k)s k. Our first main theorem, stated below, identifies the asymptotic size of the matching generated by GREEDY on the bipartite configuration model we have just defined. 
As the batched problem (i.e., computing the size of the optimal matching M*) is well understood [Bordenave et al., 2013], this quantity is sufficient to derive competitive ratios. Again, for the sake of presentation, we first assume that all capacities are fixed, equal to one; the general case is presented in Appendix A. Theorem 1. (Performances of GREEDY in the capacity-less case) Given N ≥ 1 and T = (μ_U/μ_V) N, let M_T be the matching built by GREEDY on CM(d^U, d^V); then, as N → +∞, the following convergence in probability holds: |M_T|/N → 1 − φ_U(1 − G(1)), where φ_U and φ_V denote the generating series of π_U and π_V introduced above and G is the unique solution of the following ordinary differential equation: G'(s) = [1 − φ_V(1 − φ'_U(1 − G(s))/μ_U)] / [(μ_V/μ_U) φ'_U(1 − G(s))], G(0) = 0. (1) Moreover, for any s ∈ [0, 1], if M_T(s) is the matching obtained by GREEDY after seeing a proportion s of vertices of V, then, in probability as N → +∞, |M_T(s)|/N → 1 − φ_U(1 − G(s)). (2) Convergence rates are explicit; with probability exponentially large, at least 1 − ζ N exp(−ξ N^{c/2}), sup_{s ∈ [0,1]} | |M_T(s)|/N − (1 − φ_U(1 − G(s))) | ≤ N^{−c}, where ζ and ξ depend only on the (first two) moments of both π_V and π_U, and c is some universal constant (set arbitrarily as 1/20 in the proof). Theorem 1 generalizes to the case with capacities, see Sections A.1 and A.2. The details of the proof of Theorem 1 are postponed to Appendix D, but the main ideas are given in Section 3. 2.3 Examples, Instantiations and Corollaries We provide in this section some interesting examples and corollaries that illustrate the power of Theorem 1, and how it can be used to compare different situations. (Footnote 4: and even O(√N) with probability exponentially large in N, as both distributions are sub-Gaussian, so the effects of those additional vertices can be neglected.) 2.3.1 d-regular graphs The first typical example of random graphs is "d-regular" graphs, for some d ∈ N, i.e., graphs such that each vertex has an exact degree of d (to avoid trivial examples, we obviously assume d ≥ 2). It is non-trivial to sample a d-regular graph at random, yet it is easy to generate random graphs G_N with the configuration model described above, with the specific choices of π_U = π_V = δ_d, the Dirac mass at d. The downside is that G_N is not exactly a d-regular bipartite random graph (as some vertices might be connected more than once, i.e., there might exist parallel edges). However, conditioned on being simple, i.e., without multiple edges and loops, it has the law of a uniform d-regular bipartite random graph. Moreover, the probability of being simple is bounded away from 0 [Van Der Hofstad, 2016]; as a consequence, any property holding with probability tending to 1 for G_N holds with probability tending to 1 for uniform d-regular bipartite random graphs. Finally, we also mention that Hall’s Theorem [Frieze and Karoński, 2016] implies that G_N admits a perfect matching, so that |M*| = N. Instantiating Equation (1) to d-regular graphs yields that the competitive ratio of GREEDY converges, with probability 1, to 1 − (1 − G(1))^d, where G is the solution of the following ODE: [(1 − G(s))^{d−1} / (1 − (1 − (1 − G(s))^{d−1})^d)] G'(s) = 1/d. (3) As expected, had we taken d = 1, then G(s) = s, hence the competitive ratio of GREEDY is 1 (but again, d = 1-regular graphs are trivial). More interestingly, if d = 2, the ODE has a closed form solution: G(s) = exp(s/2) − 1, so that the competitive ratio of GREEDY converges to 4√e − (e + 3) ≈ 0.877 ≥ 1 − 1/e ≈ 0.632, where the latter is a standard bound of the competitive ratio of GREEDY (for general, non-regular graphs) [Mehta, 2012].
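As a quick numerical illustration, the ODE in Equation (1) can be integrated directly with SciPy; the sketch below takes the generating series, their derivatives and their means as inputs and returns the asymptotic fraction of U matched by GREEDY. The function names and the 2-regular instantiation are illustrative assumptions; the value it reproduces (close to 0.877) is the one derived above, and for d-regular graphs |M*| = N, so this fraction is also the competitive ratio.

import numpy as np
from scipy.integrate import solve_ivp

def greedy_fraction(phi_U, dphi_U, phi_V, mu_U, mu_V):
    # Integrate Equation (1) on [0, 1] and return 1 - phi_U(1 - G(1)).
    def rhs(s, G):
        num = 1.0 - phi_V(1.0 - dphi_U(1.0 - G[0]) / mu_U)
        den = (mu_V / mu_U) * dphi_U(1.0 - G[0])
        return [num / den]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0], rtol=1e-9, atol=1e-12)
    return 1.0 - phi_U(1.0 - sol.y[0, -1])

# 2-regular graphs: pi_U = pi_V = delta_2, so phi(s) = s^2 and mu = 2.
frac = greedy_fraction(lambda s: s**2, lambda s: 2.0 * s, lambda s: s**2, 2.0, 2.0)
print(frac)   # ~0.877, matching 4*sqrt(e) - (e + 3)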
Solving Equation (3) In the general case d 3, even if Equation (3) does not have a closed form solution, it is still possible to provide some insights. Notice first that the polynomial P (X) = 1 (1 (1 X)d 1)d admits n := d(d 1) roots, among which there is 1 with multiplicity d 1. If X is another root, then 1 (1 X)d 1 d = 1 , 1 (1 X)d 1 = e ik⇡ d , k = 1, . . . , d 1. Therefore, (1 X)d 1 = 1 e ik⇡ d , which admits d 1 distinct solutions for each k = 1, . . . , d 1. The resulting n := (d 1)2 distinct complex, denoted x1, . . . , xn, are the roots of P (X)/(1 X)d 1, so the ODE reduces to: y0(t)Q 1in y(t) xi = 1 d . (4) Since the following trivially holds: 1Q 1in(X xi) = X 1in 1Q j 6=i(xi xj) 1 X xi =: X 1in ai X xi . it is possible to integrate Equation (4) in P 1in ai log(y(t) xi) = s d + c to finally get Y 1in (y(t) xi) ai = C exp( s d ), and since y(0) = 0, it must hold that C = Q 1in( xi) ai . As a consequence, y(1) solves: Y 1in (y(1) xi) ai = e1/d Y 1in ( xi) ai . Unfortunately, even for d = 3, the solution somehow simplifies but has no closed form; on the other hand, numerical computations indicate that the competitive ratio of GREEDY converges to 0.89 when d = 3 and N tends to infinity. We provide in Figure 3 the numerical solutions of the ODE for d-regular graphs (actually, we draw the functions 1 U (1 G(s)) that are more relevant) for various values of d; the end-point obtained at s = 1 indicates the relative performance of GREEDY. As expected, those functions are point-wise increasing with d (as the problem becomes simpler and simpler for GREEDY when d 2). 2.3.2 The Erdős-Rényi case. In a Erdős-Rényi graph, there is an edge between two vertices u 2 U and v 2 V with some probability p = cN , independently from each others. As N goes to infinity, the number of edges to a vertex follows (approximately) a Poisson law of parameter c > 1. As a consequence, we consider the configuration model where ⇡U and ⇡V are Poisson laws of parameter c, which yields µ = c, U (s) = ec(s 1). In this case, Equation (1) becomes: cG0(s) e cG(s) 1 e c e cG(s) = 1. The solutions are given by: G(s) = 1 c log ✓ c log(ek cs +1) ◆ , yielding X(1 G(s)) = 1 c log ek cs +1 . The initial condition U (1 G(0)) = U (1) = 1 gives ek = ec 1, from which we deduce that the number of matches of GREEDY is asymptotically proportional to 1 U (1 G(1)) = 1 log (2 e c) c , which recovers, as a sanity check, some existing results [Mastin and Jaillet, 2013]. 2.3.3 The comparison of different configuration models Using Gronwall’s Lemma, it is possible to show Theorem 1 can be used to compare different configuration models, as in the following Corollary. Corollary 1. Consider two configuration models CM1(dU1 ,d V 1 ) and CM2(d U 2 ,d V 2 ), s.t. d U 1 and dU1 are both drawn i.i.d. from ⇡U , d V 1 is drawn i.i.d. from ⇡ 1 V and d V 2 is drawn i.i.d. from ⇡2V , with P x x⇡ 1 V (x) = P x x⇡ 2 V (x). If 1 V (s) 2 V (s) for any s 2 (0, 1), then by denoting respectively 1 and 2 the asymptotic proportion of vertices matched by GREEDY in CM1(dU1 ,d V 1 ) and CM2(dU2 ,d V 2 ), it holds that necessarily 2 1. For instance, let us assume that the degree distribution on the offline side is fixed. Then the matching size obtained by GREEDY is asymptotically larger if vertices on the online side all have exactly the same degree d rather than if those degrees are drawn from a Poisson distribution with expectation d. 
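The comparison just stated can be illustrated with a small simulation — an illustrative sketch of the configuration-model construction, not the paper's experiment — in which the offline degrees are fixed to 2 and the online degrees are either exactly 2 or Poisson with mean 2; the graph size, the seed and the handling of the extra absorbing vertex are assumptions made for the sketch.

import numpy as np

def greedy_fraction_cm(deg_U, deg_V, rng):
    N = len(deg_U)
    stubs = np.repeat(np.arange(N), deg_U)      # half-edges on the offline side
    need = int(deg_V.sum())                     # half-edges on the online side
    if need > stubs.size:                       # a dummy vertex N absorbs the surplus
        stubs = np.append(stubs, np.full(need - stubs.size, N))
    rng.shuffle(stubs)                          # uniform pairing of half-edges
    stubs = stubs[:need]                        # surplus offline stubs are discarded
    matched = np.zeros(N + 1, dtype=bool)
    matched[N] = True                           # the dummy vertex is never available
    size, pos = 0, 0
    for d in deg_V:                             # online vertices arrive one by one
        nbrs = stubs[pos:pos + d]
        pos += d
        free = np.unique(nbrs[~matched[nbrs]])
        if free.size:                           # GREEDY: match to any free neighbour
            matched[rng.choice(free)] = True
            size += 1
    return size / N

rng = np.random.default_rng(0)
N = 50_000
deg_U = np.full(N, 2)                                      # offline degrees fixed to 2
print(greedy_fraction_cm(deg_U, np.full(N, 2), rng))       # online degrees exactly 2: ~0.877
print(greedy_fraction_cm(deg_U, rng.poisson(2.0, N), rng)) # online degrees Poisson(2): smaller, as stated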
A similar result (with a different criterion) holds with fixed degree distribution on the online side and differing one on the offline side. 2.4 GREEDY can outperform RANKING ! We recall that the RANKING algorithm, which is the worse case optimal, chooses at random a ranking over U and uses it to break ties (i.e., if two vertices u and u0 can be matched to vk, then it is the one with the smallest rank that is matched by RANKING). Quite surprisingly, we get that in the configuration model RANKING can have a worse competitive ratio than GREEDY, which advocates again for its thorough study. Proposition 1. Let R and G be the assymptotic performances of RANKING and GREEDY on the 2-regular graph. The following holds: G > R. In other words, GREEDY outperforms RANKING in the 2-regular graph. We conjecture that the above result actually holds for any d 2, and more generally for a wide class of distributions ⇡U and ⇡V (finding a general criterion would be very interesting). The proof of Proposition 1 is provided in Appendix G. The main idea is that in the 2-regular graph, RANKING is biased towards selecting as matches vertices with two remaining half-edges rather than just one. Indeed, vertices with only one remaining half-edge were not selected previously and thus have a higher rank. The vertices with only one remaining half-edge will not get matched in the subsequent iterations, so not picking them as matches is suboptimal. On the other hand, GREEDY picks any match uniformly at random and does not exhibit such bias. 3 Ideas of proof of Theorem 1 The main idea behind the proof of Theorem 1 (postponed to Section D) is to show that the random deterministic evolution of the matching size generated by GREEDY is closely related to the solution of some ODE (this is sometimes called “the differential equation method” [Wormald, 1995] or “stochastic approximations” [Robbins and Monro, 1951]). Computing the solution of the ODE is easier - if not explicitly, at least numerically in intricate cases - than estimating the performances of GREEDY by Monte-Carlo simulations and it provides qualitative, as well as quantitative, properties. Tracking the matching size is non-trivial because the vertices (in U and V) have different degrees, hence some of them are more likely to be matched than others. However, in the configuration model, each vertex has the same distribution of degrees before the sequences dU and dV are fixed. As a consequence, the proof relies on the three following techniques 1. The graph is built sequentially, along with the matching and not beforehand (fixing the ”randomness” at the beginning would be very difficult to handle in the analysis). Thankfully, this does not change the law of the graph generated (this is obviously crucial). 2. We are not only going to track the size of the matching built as we need to handle different probabilities of matching (and pairing the graph) for each vertex. As a consequence, we are going to track the numbers of non-matched vertices which have still i half-edges to be paired and the number of already matched vertices that have j half-edges remaining. This will give one different ODE per value of i of j. Since ⇡U and ⇡V are sub-Gaussian, we will prove that with arbitrarily high probability - exponential in N -, there are only a polynomial number of such equations 3. All those differential equations are then “aggregated” to build the final ODE satisfied by the matching size. 
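As an empirical illustration of Proposition 1, the following sketch compares GREEDY and RANKING on 2-regular bipartite configuration models; the graph size, the seed and the exact tie-breaking details are illustrative assumptions, not the paper's experimental protocol.

import numpy as np

def matched_fraction(N, policy, rng):
    stubs = np.repeat(np.arange(N), 2)          # two half-edges per offline vertex
    rng.shuffle(stubs)                          # uniform pairing with the online half-edges
    rank = rng.permutation(N)                   # random priority over U, used only by RANKING
    matched = np.zeros(N, dtype=bool)
    size = 0
    for t in range(N):                          # online vertex t owns stubs 2t and 2t+1
        nbrs = stubs[2 * t: 2 * t + 2]
        free = np.unique(nbrs[~matched[nbrs]])
        if free.size == 0:
            continue
        u = rng.choice(free) if policy == "greedy" else free[np.argmin(rank[free])]
        matched[u] = True
        size += 1
    return size / N

rng = np.random.default_rng(0)
N = 100_000
print(matched_fraction(N, "greedy", rng))       # close to 4*sqrt(e) - (e + 3) ~ 0.877
print(matched_fraction(N, "ranking", rng))      # slightly lower, consistent with Proposition 1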
Interestingly, this aggregated ODE has a simple form, while the full system is on the other hand quite intricate. In the following sub-sections, we separate the proofs in the different building blocks to provide intuitions; the proof of technical lemmas are deferred to the appendix. 3.1 Building the graph together with the matching The first step in the analysis is to notice that the bipartite configuration model can be constructed by sequentially pairing the half-edges coming from V . The matching generated by GREEDY is then constructed simultaneously with the graph. More precisely, given two sequences5 of nonnegative integers dU = (dU1 , . . . , dUN ) and d V [ {dVT+1} = (d V 1 , . . . , d V T , d V T+1), we introduce in the following a generating algorithm that simultaneously build the associated bipartite configuration model CM(dU ,dV) together with GREEDY. Recall that the bipartite configuration model is obtained through a uniform matching between the half-edges of U and the half-edges of V . To avoid confusion, we will call a marked matching a pairing of two half-edges that corresponds to an edge that will belong to the constructed matching M. This construction pseudo-code is detailed in Algorithm 1. Algorithm 1: GREEDY MATCHING CONFIGURATION MODEL WITHOUT CAPACITIES Input: dU = (dU1 , . . . , d U N ) and d V = (dV1 , . . . , d V T ) Initialization. M0 ;, E0 ; and HU0 { half-edges of U} for t = 1, . . . , T do Order uniformly at random the edges emanating from vt: et1, . . . , etkt for i = 1, . . . , kt do Choose uniformly an half-edge eUi in HU E E [ {u(eUi ), vt} // Create an edge between e t i and e U i HU HU \ {eUi } // Remove the half-edge if vt and u(eUi ) unmatched then Mt Mt 1 [ {u(eUi ), vt} // vt is matched end end end CM(dU ,dV) (U ,V, E). Output: Bipartite configuration model CM(dU ,dV) and matching MT on it. Since each pairing of each half-edge is done uniformly at random, the graph obtained at the end of the algorithm has indeed the law of a bipartite configuration model. Moreover, it is easy to see that M corresponds to the matching constructed by GREEDY MATCHING on CM(dU ,dV). 3.2 Differential Equation Method - Stochastic Approximation As mentioned above, several quantities are going to be tracked through time: for all k 2 {0, . . . , T} and all i 0, we define: • Fi(k) as the number of vertices u 2 U that are not yet matched at the end of step k and whose remaining degree is i, meaning that du i of their initial half-edges have been paired. We will refer them to as free vertices. • Mi(k) as the number of vertices u 2 U already matched at the end of step k and whose remaining degree is i. We will refer them to as marked vertices. Notice that for all 0 k T , the sum Fi(k) +Mi(k) corresponds to the total number of vertices of U with remaining degree i at the end of step k. We also define • bF (k) := P i 0 iFi(k) is the number of available half-edges attached to free vertices at the end of step k, 5Without loss of generality, we assume that the additional extra vertex is always on the V side. • cM(k) := P i 0 iMi(k) is the number of available half-edges attached to marked vertices at the end of step k. We are going to study the evolution of these quantities along with the one of GREEDY. A major ingredient of the proof is to show that Fi(k) and Mi(k) closely follow the solutions of some ODE. This is the so-called differential equation method [Wormald, 1995], stated in Appendix C. For instance, it can easily be seen that bF (k) + cM(k) closely follows the function t 7! 
µU tµV on (0, µU/µV) in the following sense. Lemma 1. For every " > 0, and for all 0 k T , bF (k) + cM(k) N µU k N µV ⌘ ". with probability at least 1 exp N✏2 2 2U + exp T ✏2 2 2V . We now turn to each individual quantity Fi (resp. Mi). We can prove a similar result, yet the limit function is not explicit (unlike for the matching size as in Theorem 1 statement). The following Lemma 2 states that the discrete sequences of (free and marked) half-edges are closely related to the solutions of some system of differential equations. Before stating it, we first introduce, for any sequence of non-negative numbers (x`)` 0 and (y`)` 0 such that 0 < P ` `(x` + y`) <1, every i 0, the following mappings i(x0, x1, . . . , y0, y1, . . .) := iµVxi + (i+ 1)µVxi+1 h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) (5) and i(x0, x1, . . . , y0, y1, . . .) := iµVyi + (i+ 1)µVyi+1 + h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) , where h is the following function, well-defined on [0, 1], h(s) = 1 V(s) 1 s . Lemma 2. With probability 1 ⇣N exp( ⇠N c/2), there are at most N c quantities Fi and Mi, and for all 0 k T and all i 0 Fi(k) N fi ✓ k N ◆ N 2c and Mi(k) N mi ✓ k N ◆ N 2c, where ⇣, depend only on the (first two) moments of ⇡V and ⇡U and c = 1/20. The continuous mappings fi and mi are solutions of the system of differential equations on [0, µU/µV) dfi dt = i(f0, f1, . . . ,m0,m1, . . .), dmi dt = i(f0, f1, . . . ,m0,m1, . . .), fi(0) = ⇡U (i), mi(0) = 0. (6) This system is well defined as stated by the following Lemma 3. Lemma 3. The system (6) has a unique solution which is well-defined on [0, µU/µV). More precisely, denoting by f and m the generating series of the sequences (fi)i 0 and (mi)i 0, f(t, s) = X i 0 fi(t)s i and m(t, s) = X i 0 mi(t)s i, it holds that: f ✓ µU µV 1 e µV t , s ◆ = U (s 1)e µV t + 1 F (t) , (7) and m ✓ µU µV 1 e µV t , s ◆ = ˆ t 0 F 0(u) 0U (s 1)e µVu + 1 F (u) du. where F is a solution of the following ODE 1 µU 0U (1 F (t)) 1 V ⇣ 1 1µU 0 U (1 F (t)) ⌘F 0(t) = e µV t . 3.3 Aggregating solutions to compute GREEDY performances To get Theorem 1, notice that the number of vertices matched by GREEDY is N minus the number of free vertices remaining at the end, which is approximately equal to Nf(µUµV , 1) by definition of f and because of Lemma 2. This corresponds to t = +1 in Equation (7), thus the performance of GREEDY is, with arbitrarily high probability, arbitrarily close to N(1 U (1 F (+1))) The statement of Theorem 1 just follows from a simple final change of variable. Conclusion We studied theoretical performances of GREEDY algorithm on matching problems with different underlying structures. Those precise results are quite interesting and raise many questions, especially since GREEDY actually outperforms RANKING in many different situations (in theory for 2-regular graphs, but empirical evidence indicates that this happens more generically). Our approach has also successfully been used to unveil some questions on the comparison between different possible models. But more general questions are still open; for instance, assuming that the expected degree is fixed, which situation is the more favorable to GREEDY and online algorithm: small or high variance, or more generally this distribution ⇡U or an alternative one ⇡0U ? The obvious technique would be to compare the solution of the different associated ODE’s. Similarly, the questions of stability/robustness of the solution to variation in the distribution ⇡U and ⇡V are quite challenging and left for future work. 
We believe online matching will become an important problem for the machine learning community in the future. Each year, the complexity of the underlying graphs increases and we are considering adding features to the model in future work (such as random variables on the edges, modeling the interest for a consumer for a given product), or connection modeled via some Kernel between vertices features (say, if users and products/campaigns are embedded in the same space). In this context, machine learning tools will certainly be needed to tackle the problem. Acknowledgments and Disclosure of Funding V. Perchet acknowledges support from the ANR under grant number #ANR-19-CE23-0026 as well as the support grant as part of the Investissement d’avenir project, reference LabEx Ecodec/ANR11-LABX-0047. Nathan Noiry also acknowledges support from the Telecom Paris DSAIDIS chair and from the ANR ProGraM (ANR-19-CE40-0025). Flore Sentenac is supported by IP PARIS’ PhD Funding.
1. What is the focus of the paper regarding online bipartite matching? 2. What are the strengths of the proposed approach, particularly in terms of its performance guarantees? 3. Do you have any concerns or suggestions regarding the paper's clarity and terminology usage? 4. Have you checked the proof of the main claim, and what are your thoughts on its technical soundness? 5. What are some minor typos that can be improved in the paper?
Summary Of The Paper Review
Summary Of The Paper The paper studies the performance of a greedy algorithm for online bipartite matching under the configuration model. In this model the vertex degrees of the input graph adhere to a fixed distribution. The authors estimate the competitive ratio of the greedy algorithm and show that in this model the greedy algorithm achieves better performance guarantees than related methods. Review Originality/Significance: The authors present an interesting result in the area of online matching under a stochastic model. It has to be said that I am not an expert in this area, but it seems both the result and the proof technique are original and should be of interest to the community. Quality/Clarity: The paper is mostly clear. At times the authors use terminology that could be improved, such as: "subGaussian" (line 113) -- What do you mean by that? "half-edges" -- requires definition "multiple edges" (line 158) -- Should be replaced by "parallel edges" (the standard terminology) The statement in Proposition 1 should be formalized. I did not have time to check the proof of the main claim. From a presentation point of view the paper should meet the bar of acceptance, given the authors improve on the issues pointed out above. Typos: N instead of n (line 164) *reason (line 69) "worst" -> "worse" (line 214)
NIPS
Title Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm Abstract Motivated by sequential budgeted allocation problems, we investigate online matching problems where connections between vertices are not i.i.d., but they have fixed degree distributions – the so-called configuration model. We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant stochastic discrete processes by their continuous counterparts, which are solutions of an explicit system of partial differential equations. This technique gives precise bounds on the estimation errors, with arbitrarily high probability as the problem size increases. In particular, it allows the formal comparison between different configuration models. We also prove that, quite surprisingly, GREEDYcan have better performance guarantees than RANKING, another celebrated algorithm for online matching that usually outperforms the former. 1 Introduction Finding matchings in bipartite graphs (U[V , E), where E ⇢ U⇥V is a set of edges, is a long-standing problem with different motivations and approaches [Godsil, 1981, Zdeborová and Mézard, 2006, Lovász and Plummer, 2009, Bordenave et al., 2013]. If U is seen as a set of resources and V as demands, the objective is to allocate as many resources to demands (an allocation - or a matching - between u and v is admissible if (u, v) 2 E) with the constraint that a resource is allocated to only one demand and vice-versa. Motivated particularly by practical applications to Internet advertising, the online variant of this problem is receiving increasing attention (we refer to the excellent survey [Mehta, 2012] for more applications, specific settings, results and techniques). In this case, the set of vertices U is present at the beginning and the graph unveils sequentially: vertices v 2 V are observed sequentially, one after the other, along with the edges they belong to. An online algorithm must decide, right after observing vk and its associated set of edges Ek := {(u, vk) 2 E} to match it to some other vertex u 2 U , at the conditions that (u, vk) 2 Ek and u 2 U has not been matched yet. The performance of an online algorithm is evaluated by its competitive ratio, which is the ratio between the size of the matching it has created and the highest possible matching in hindsight [Feldman et al., 2009]. This theoretical setting is particularly well suited for online advertising: U is the set of campaigns/ads that an advertiser can run and users v1, v2, . . . , vT arrive sequentially [Mehta, 2012, Manshadi et al., 2012]. Some of them are eligible for a large subset of campaigns, others are not (usually based 35th Conference on Neural Information Processing Systems (NeurIPS 2021). on their attributes/features, such as the geographic localization, the browsing history, or any other relevant information). The objective of an advertiser (in this over-simplified model) is to maximize the number of displayed ads. In practice, campaigns/ads are not displayed only once but have a maximal budget of impressions (say, a specific ad can be displayed only 10.000 times each day). A possible trick consists of duplicating the vertices of U as many times as the budget. However, this results in strong and undesirable correlations between vertices. 
It is, therefore, more appropriate to consider a bipartite graph with capacities and admissible matchings as subsets of edges such that each vertex belongs to several different edges, but not more than their associated capacities ! 2 N (a vertex v 2 V is matched once while u 2 U can be matched !u times). This online matching problem with capacities has been quite extensively studied. It is known that GREEDY, which matches all incoming vertices to any available neighbor has a competitive ratio of 1/2 in the worst case, albeit it achieves 1 1/e as soon as the incoming vertices arrive in Random Order [Goel and Mehta, 2008b]. The worst-case optimal algorithm is the celebrated RANKING, which achieves 1 1/e on any instance [Karp et al., 1990, Devanur et al., 2013, Birnbaum and Mathieu, 2008], and also has better guarantees in the Random Order setting [Mahdian and Yan, 2011]. Beyond the adversarial setting, the following stochastic setting has been considered: there exist a finite set of L “base” vertices v(1), . . . , v(L) associated to base edge-sets E(1), . . . , E(L). When a vertex vk arrives, its type ✓k 2 {1, . . . , L} is drawn iid from some distribution (either known beforehand or not) and then its edge set is set as Ek = E(✓k). In the context where the distribution is known, algorithms with much better competitive ratios than GREEDY or RANKING were designed [Manshadi et al., 2012, Jaillet and Lu, 2014, Brubach et al., 2019], specifically with a competitive ratio of 1 2/e2 when the expected number of arrival of each type are integral and 0.706 without this assumption. Notably, those competitive ratios still hold with Poisson arrival rates rather than a fixed number of arrivals. On a side note, a vast line of work considers online matching in weighted graphs [Devanur et al., 2012, Goel and Mehta, 2008a, Mehta, 2012], which is outside the scope of this paper. However, it is still worth noting that the unweighted graph is a weighted graph with all weights equal. This model of the stochastic setting is quite interesting but rather strong: it lacks flexibility and cannot be used to represent some challenging instances ( for example when the degrees of each vertex U increase linearly with the number of vertices in V , or when the set U of campaigns must be fixed so that the model is well specified, etc...). Another tentative is to consider Erdős-Rényi graphs assuming that each possible edge is present in U ⇥V with some fixed probability and independently of the other edges (see [Mastin and Jaillet, 2013]). The most interesting and challenging setting corresponds to the so-called sparse regime where each vertex of U has an expected degree independent of the size n of V , which amounts to take a probability of connection equal to c/n. Interestingly enough, even the analysis of the simplest GREEDY algorithm is quite challenging and already insightful in those models [Borodin et al., 2018, Arnosti, 2019, Dyer et al., 1993, Mastin and Jaillet, 2013]. Unfortunately, although this Erdős-Rényi model is compatible with growing sets U and V , it also turns out to be quite restrictive. The main reason is that the approximate Poisson degree distribution of the vertices has light-tail and does not allow for the appearance of the so-called scale-free property satisfied by many real-world networks [Barabási et al., 2000, Van Der Hofstad, 2016]. We, therefore, consider a more appropriate random graphs generation process called configuration model, introduced by [Bender and Canfield, 1978] and [Bollobás, 1980]. 
The optimal matching of this model has been computed in [Bordenave et al., 2013]. The configuration model is particularly well suited to handle different situations such as the following one. Assume that campaigns can either be “intensive” (with many eligible users) or “selective/light” (few eligible users), with an empirical proportion of, say, 20%/80%. Then whether an advertiser handles 100 campaigns at the same time or 10.000, it will always have roughly this proportion of intensive vs. light campaigns. Similarly, some users are more valuable than others, and are thus eligible for more campaigns than the others; the proportion of each type being independent of the total population size. The configuration model accommodates these observations by basically drawing iid degrees for vertices U and V (accordingly to some different unknown distributions for U and V) and then by finding a graph such that those degrees distribution are satisfied (up to negligible errors); as a consequence, the graphs generated are sparse, in the sense that the number of edges grows linearly with the number of vertices. Additionally, the configuration model is a well-suited random graph model which mimics a number of properties of real-world complex networks, while being analytically tractable. For instance, choosing power-law distributions for the degrees allows to obtain the so-called scale-free property (often observed in practice, as highlighted for the web by Faloutsos et al. [1999]). The configuration model also displays the so called “small-world phenomenon” (observed for instance in the graph of Facebook by Backstrom et al. [2012]) as its diameter is of logarithmic order. Main contribution We investigate the performances (in terms of expected competitive ratio) of the GREEDY matching algorithm in configuration models and we provide explicit quantitative results using stochastic approximation techniques [Wormald, 1995]; we prove that the increasing size of the random matching created is arbitrarily close to the solution of some explicit ODE. Solving the latter then gives in turn the solution to the original problem. The remaining of the paper is organized as follows. Section 2 describes precisely the problem and Theorem 1 is our first main result: it describes the performances of GREEDY in the capacity-less problem. The proof of Theorem 1 is delayed to Appendix D, but the main ideas and intuitions are provided in Section 3. The online matching with capacities problem is treated in Appendix A. 2 Online Matching Problems; Models and main result Consider a bipartite graph with capacities G = (U ,V, E ,!) where U = {1, . . . , N} and V = {1, . . . , T} are two finite set of vertices, E ⇢ (u, v), u 2 U , v 2 V is the set of edges and ! : U ! N⇤ is a capacity function. A matching M on G is a subset of edges e 2 E such that any vertex v 2 V is the endpoint of at most one edge e 2 M and any vertex u 2 U is the endpoint of at most !u edges in M . We will denote by M the set of matchings on G; the optimal matching M⇤ 2M is the one (or any one) with the highest cardinality, denoted by |M⇤|. The batched matching problem consists in finding any optimal matching M⇤ given a graph with capacities G; the online variant might be a bit more challenging, as the matching is constructed sequentially. Formally, the set of vertices U and their capacities ! are known from the start, and vertices v 2 V arrive sequentially (with the edges they belong to) and M0 = ;. 
At stage t 2 N – assuming a matching Mt 1 has been constructed –, a decision maker observes a new vertex1 vt and its associated set of edges {(u, vt);u 2 E}. If possible, one of these edges (ut, vt) is added to Mt 1, with the constraint that Mt = Mt 1 [ {(ut, vt)} is still a matching. The objective is to maximize the size of the constructed matching MT . The classical way to evaluate the performances of an algorithm is the competitive ratio, defined as |MT |/|M⇤| 2 [0, 1] (the higher the better). 2.1 Structured online matching via Configuration Model As mentioned before, the online matching problem can be quite difficult without additional structure. We will therefore assume that the vertex degrees in U and V have (at least asymptotically in N and T ) some given subGaussian2 distributions ⇡U and ⇡V , of respective expectation µU and µV and respective proxy-variance 2U and 2 V . Those numbers are related in the sense that we assume 3 that T = µUµV N 2 N. Given those degree distributions, the graphs we consider are random draws from a bipartite configuration model described below; for the sake of clarity, we first consider the capacity-less case (when !u = 1 for all u 2 U). Given ⇡U and ⇡V and N,T 1, let dU1 , . . . , dUN 2 N i.i.d. ⇠ ⇡U and dV1 , . . . , dVT 2 N i.i.d. ⇠ ⇡V be independent random variables; intuitively, those numbers are respectively the number of half-edges attached to vertex in U and V . Consider also two extra random variables dVT+1 = max NX i=1 dUi TX j=1 dVj , 0 and dUN+1 = max TX j=1 dVj NX i=1 dUi , 0 1Although the order of arrival is irrelevant to the models we studied, it could have an impact on other models. 2X is subGaussian with proxy-variance 2 if for any s 2 R,E[exp(sX)] exp ⇣ 2s2 2 ⌘ . Actually, we only need that ⇡U and ⇡V have some finite moment of order > 2. 3In the general case, consider T = bNµU/µVc. The proof is identical, up to a negligible 1/N error term so that equality between total degrees holds, i.e., PN+1 i=1 d U i = PT+1 j=1 d V j . Finally, a random (capacity- less) bipartite graph denoted by CM(dU ,dV ) is constructed with a uniform pairing of half-edges of U [ {N + 1} with half-edges of V [ {T + 1} and removing vertices T + 1 and N + 1 and their associated edges. These two artificially added vertices are just here to define a pairing between half-edges. Notice that, by the law of large numbers and since T = (µU/µV)N , dVT+1 = o(N) and dUN+1 = o(N) almost surely 4. The bipartite configuration model CM(dU ,dV) is then the random graph obtained by a uniform matching between the half-edges of U and the half-edges of V , where the random sequences dU = (dUi )i and dV = (dVj )j are defined as above. 2.2 Competitive ratio of GREEDY algorithm. Main result The first question to investigate in this structured setting is the computation of the (expected) competitive ratio of the simple algorithm GREEDY. It constructs a matching by sequentially adding any admissible edge uniformly at random. Describing it and stating our results require the following additional notations: for any e = (u, v) 2 E, u(e) = u (resp. v(e) = v) is the extremity of e in U (resp. V); the generating series of ⇡U and ⇡V are denoted by U and V and are defined as U (s) := X k 0 ⇡U (k)s k and V(s) := X k 0 ⇡V(k)s k. Our first main theorem, stated below, identifies the asymptotic size of the matching generated by GREEDY on the bipartite configuration model we have just defined. 
As the batched problem (i.e., computing the size of the optimal matching M*) is well understood [Bordenave et al., 2013], this quantity is sufficient to derive competitive ratios. Again, for the sake of presentation, we first assume that all capacities are fixed, equal to one; the general case is presented in Appendix A.

Theorem 1. (Performances of GREEDY in the capacity-less case) Given N ≥ 1 and T = (µ_U/µ_V) N, let M_T be the matching built by GREEDY on CM(d^U, d^V); then the following convergence in probability holds:
|M_T| / N → 1 − φ_U(1 − G(1)) in probability as N → +∞,
where G is the unique solution of the following ordinary differential equation:
G'(s) = [ 1 − φ_V( 1 − (1/µ_U) φ'_U(1 − G(s)) ) ] / [ (µ_V/µ_U) φ'_U(1 − G(s)) ]; G(0) = 0.   (1)
Moreover, for any s ∈ [0, 1], if M_T(s) is the matching obtained by GREEDY after seeing a proportion s of vertices of V, then
|M_T(s)| / N → 1 − φ_U(1 − G(s)) in probability as N → +∞.   (2)
Convergence rates are explicit; with probability exponentially large, at least 1 − ζ N exp(−ξ N^{c/2}),
sup_{s ∈ [0,1]} | |M_T(s)|/N − (1 − φ_U(1 − G(s))) | ≤ N^{−c},
where ζ, ξ depend only on the (first two) moments of both π_V and π_U, and c is some universal constant (set arbitrarily as 1/20 in the proof).

Theorem 1 generalizes to the case with capacities, see Sections A.1 and A.2. The details of the proof of Theorem 1 are postponed to Appendix D, but the main ideas are given in Section 3.

2.3 Examples, Instantiations and Corollaries

We provide in this section some interesting examples and corollaries that illustrate the powerfulness of Theorem 1, and how it can be used to compare different situations. (Footnote 4: and even O(√N) with probability exponentially large in N, as both distributions are sub-Gaussian; so the effects of those additional vertices can be neglected.)

2.3.1 d-regular graphs

The first typical example of random graphs are "d-regular" ones, for some d ∈ N, i.e., graphs such that each vertex has an exact degree of d (to avoid trivial examples, we obviously assume d ≥ 2). It is non-trivial to sample a d-regular graph at random, yet it is easy to generate random graphs G_N with the configuration model described above, with the specific choices of π_U = π_V = δ_d, the Dirac mass at d. The downside is that G_N is not exactly a d-regular bipartite random graph (as some vertices might be connected more than once, i.e., there might exist parallel edges). However, conditioned to be simple, i.e., without multiple edges and loops, it has the law of a uniform d-regular bipartite random graph. Moreover, the probability of being simple is bounded away from 0 [Van Der Hofstad, 2016]; as a consequence, any property holding with probability tending to 1 for G_N holds with probability tending to 1 for uniform d-regular bipartite random graphs. Finally, we also mention that Hall's Theorem [Frieze and Karoński, 2016] implies that G_N admits a perfect matching, so that |M*| = N.

Instantiating Equation (1) to d-regular graphs yields that the competitive ratio of GREEDY converges, with probability 1, to 1 − (1 − G(1))^d, where G is the solution of the following ODE
(1 − G(s))^{d−1} / [ 1 − (1 − (1 − G(s))^{d−1})^d ] · G'(s) = 1/d.   (3)
As expected, had we taken d = 1, then G(s) = s, hence the competitive ratio of GREEDY is 1 (but again, d = 1-regular graphs are trivial). More interestingly, if d = 2, the ODE has a closed form solution: G(s) = exp(s/2) − 1, so that the competitive ratio of GREEDY converges to
4√e − (e + 3) ≈ 0.877 > 1 − 1/e ≈ 0.632,
where the latter is a standard bound of the competitive ratio of GREEDY (for general, non-regular graphs) [Mehta, 2012].
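The d = 2 value above is easy to check empirically. The short Python sketch below is our own illustration, not the paper's code: the function name greedy_on_regular_cm, the chosen sizes and seeds, and the trick of realising the uniform pairing by shuffling a list of offline half-edge owners are assumptions of this sketch. It runs GREEDY on a bipartite 2-regular configuration model and compares the matched fraction with 4√e − (e + 3); since |M*| = N for d-regular graphs, this fraction is also the competitive ratio.

import math
import random

def greedy_on_regular_cm(n, d, seed=0):
    """GREEDY matching on a bipartite d-regular configuration model
    with n offline and n online vertices (mu_U = mu_V = d)."""
    rng = random.Random(seed)
    # Each offline vertex owns d half-edges; shuffling this list realises
    # the uniform pairing with the online half-edges.
    stubs = [u for u in range(n) for _ in range(d)]
    rng.shuffle(stubs)
    matched = [False] * n
    size = 0
    for t in range(n):                        # online vertices arrive in order
        for u in stubs[t * d:(t + 1) * d]:    # offline endpoints of v_t's edges
            if not matched[u]:                # GREEDY: take any available one
                matched[u] = True
                size += 1
                break
    return size / n                           # |M*| = n for d-regular graphs

estimate = sum(greedy_on_regular_cm(20000, 2, s) for s in range(5)) / 5
print(f"simulated ratio (d = 2): {estimate:.3f}")            # ~0.877
print(f"4*sqrt(e) - (e + 3)    : {4 * math.sqrt(math.e) - (math.e + 3):.3f}")

Shuffling the offline stub list and reading it in blocks of d is equivalent to the uniform pairing of half-edges described above, which is why no explicit graph needs to be built.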
Solving Equation (3). In the general case d ≥ 3, even if Equation (3) does not have a closed form solution, it is still possible to provide some insights. Notice first that the polynomial P(X) = 1 − (1 − (1 − X)^{d−1})^d admits d(d−1) roots (counted with multiplicity), among which there is 1 with multiplicity d−1. If X is another root, then
(1 − (1 − X)^{d−1})^d = 1 ⟺ 1 − (1 − X)^{d−1} = e^{2ikπ/d}, k = 1, …, d−1.
Therefore, (1 − X)^{d−1} = 1 − e^{2ikπ/d}, which admits d−1 distinct solutions for each k = 1, …, d−1. The resulting n := (d−1)² distinct complex numbers, denoted x_1, …, x_n, are the roots of P(X)/(1 − X)^{d−1}, so the ODE reduces to:
y'(s) / Π_{1≤i≤n} (y(s) − x_i) = 1/d.   (4)
Since the following trivially holds:
1 / Π_{1≤i≤n} (X − x_i) = Σ_{1≤i≤n} [ 1 / Π_{j≠i} (x_i − x_j) ] · 1/(X − x_i) =: Σ_{1≤i≤n} a_i / (X − x_i),
it is possible to integrate Equation (4) into Σ_{1≤i≤n} a_i log(y(s) − x_i) = s/d + cst, to finally get
Π_{1≤i≤n} (y(s) − x_i)^{a_i} = C exp(s/d),
and since y(0) = 0, it must hold that C = Π_{1≤i≤n} (−x_i)^{a_i}. As a consequence, y(1) solves:
Π_{1≤i≤n} (y(1) − x_i)^{a_i} = e^{1/d} Π_{1≤i≤n} (−x_i)^{a_i}.
Unfortunately, even for d = 3, the solution somehow simplifies but has no closed form; on the other hand, numerical computations indicate that the competitive ratio of GREEDY converges to 0.89 when d = 3 and N tends to infinity. We provide in Figure 3 the numerical solutions of the ODE for d-regular graphs (actually, we draw the functions 1 − φ_U(1 − G(s)) that are more relevant) for various values of d; the end-point obtained at s = 1 indicates the relative performance of GREEDY. As expected, those functions are point-wise increasing with d (as the problem becomes simpler and simpler for GREEDY when d ≥ 2).

2.3.2 The Erdős-Rényi case

In an Erdős-Rényi graph, there is an edge between two vertices u ∈ U and v ∈ V with some probability p = c/N, independently from the others. As N goes to infinity, the number of edges at a vertex follows (approximately) a Poisson law of parameter c > 1. As a consequence, we consider the configuration model where π_U and π_V are Poisson laws of parameter c, which yields µ = c and φ_U(s) = e^{c(s−1)}. In this case, Equation (1) becomes:
c G'(s) e^{−cG(s)} / ( 1 − e^{−c e^{−cG(s)}} ) = 1.
The solutions are given by:
G(s) = −(1/c) log( (1/c) log(e^{k−cs} + 1) ), yielding φ_U(1 − G(s)) = (1/c) log( e^{k−cs} + 1 ).
The initial condition φ_U(1 − G(0)) = φ_U(1) = 1 gives e^k = e^c − 1, from which we deduce that the number of matches of GREEDY is asymptotically proportional to
1 − φ_U(1 − G(1)) = 1 − log(2 − e^{−c}) / c,
which recovers, as a sanity check, some existing results [Mastin and Jaillet, 2013].

2.3.3 The comparison of different configuration models

Using Gronwall's Lemma, it is possible to show that Theorem 1 can be used to compare different configuration models, as in the following Corollary.

Corollary 1. Consider two configuration models CM_1(d^U_1, d^V_1) and CM_2(d^U_2, d^V_2), s.t. d^U_1 and d^U_2 are both drawn i.i.d. from π_U, d^V_1 is drawn i.i.d. from π^1_V and d^V_2 is drawn i.i.d. from π^2_V, with Σ_x x π^1_V(x) = Σ_x x π^2_V(x). If φ^1_V(s) ≤ φ^2_V(s) for any s ∈ (0, 1), then, denoting respectively by ρ_1 and ρ_2 the asymptotic proportions of vertices matched by GREEDY in CM_1(d^U_1, d^V_1) and CM_2(d^U_2, d^V_2), it holds that necessarily ρ_2 ≤ ρ_1.

For instance, let us assume that the degree distribution on the offline side is fixed. Then the matching size obtained by GREEDY is asymptotically larger if vertices on the online side all have exactly the same degree d rather than if those degrees are drawn from a Poisson distribution with expectation d.
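To make the last comparison concrete, here is a hedged simulation sketch (our own illustrative code, not the authors'; the helper name greedy_cm, the use of numpy, the chosen sizes and the padding with a dummy offline vertex are assumptions of the sketch, the padding mirroring the extra balancing vertex of Section 2.1). It keeps the offline degrees fixed at d and runs GREEDY once with constant online degrees equal to d and once with Poisson(d) online degrees; per Corollary 1 and the example above, the constant-degree run should match a larger fraction of offline vertices.

import numpy as np

def greedy_cm(deg_off, deg_on, rng):
    """Run GREEDY on a bipartite configuration model with the given degree
    sequences; return the fraction of offline vertices that get matched."""
    stubs = np.repeat(np.arange(len(deg_off)), deg_off)
    deficit = int(np.sum(deg_on)) - len(stubs)
    if deficit > 0:        # balance total degrees with a dummy offline vertex (-1)
        stubs = np.concatenate([stubs, -np.ones(deficit, dtype=int)])
    rng.shuffle(stubs)
    taken = np.zeros(len(deg_off), dtype=bool)
    matched, pos = 0, 0
    for dv in deg_on:                        # online vertices arrive one by one
        for u in stubs[pos:pos + dv]:        # offline owners of the paired stubs
            if u >= 0 and not taken[u]:
                taken[u] = True
                matched += 1
                break
        pos += dv
    return matched / len(deg_off)

rng = np.random.default_rng(0)
n, d = 20000, 3
deg_off = np.full(n, d)                                    # offline side kept fixed
print("constant online degrees:", greedy_cm(deg_off, np.full(n, d), rng))
print("Poisson online degrees :", greedy_cm(deg_off, rng.poisson(d, size=n), rng))

The first printed fraction should dominate the second, as predicted.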
A similar result (with a different criterion) holds with fixed degree distribution on the online side and differing one on the offline side. 2.4 GREEDY can outperform RANKING ! We recall that the RANKING algorithm, which is the worse case optimal, chooses at random a ranking over U and uses it to break ties (i.e., if two vertices u and u0 can be matched to vk, then it is the one with the smallest rank that is matched by RANKING). Quite surprisingly, we get that in the configuration model RANKING can have a worse competitive ratio than GREEDY, which advocates again for its thorough study. Proposition 1. Let R and G be the assymptotic performances of RANKING and GREEDY on the 2-regular graph. The following holds: G > R. In other words, GREEDY outperforms RANKING in the 2-regular graph. We conjecture that the above result actually holds for any d 2, and more generally for a wide class of distributions ⇡U and ⇡V (finding a general criterion would be very interesting). The proof of Proposition 1 is provided in Appendix G. The main idea is that in the 2-regular graph, RANKING is biased towards selecting as matches vertices with two remaining half-edges rather than just one. Indeed, vertices with only one remaining half-edge were not selected previously and thus have a higher rank. The vertices with only one remaining half-edge will not get matched in the subsequent iterations, so not picking them as matches is suboptimal. On the other hand, GREEDY picks any match uniformly at random and does not exhibit such bias. 3 Ideas of proof of Theorem 1 The main idea behind the proof of Theorem 1 (postponed to Section D) is to show that the random deterministic evolution of the matching size generated by GREEDY is closely related to the solution of some ODE (this is sometimes called “the differential equation method” [Wormald, 1995] or “stochastic approximations” [Robbins and Monro, 1951]). Computing the solution of the ODE is easier - if not explicitly, at least numerically in intricate cases - than estimating the performances of GREEDY by Monte-Carlo simulations and it provides qualitative, as well as quantitative, properties. Tracking the matching size is non-trivial because the vertices (in U and V) have different degrees, hence some of them are more likely to be matched than others. However, in the configuration model, each vertex has the same distribution of degrees before the sequences dU and dV are fixed. As a consequence, the proof relies on the three following techniques 1. The graph is built sequentially, along with the matching and not beforehand (fixing the ”randomness” at the beginning would be very difficult to handle in the analysis). Thankfully, this does not change the law of the graph generated (this is obviously crucial). 2. We are not only going to track the size of the matching built as we need to handle different probabilities of matching (and pairing the graph) for each vertex. As a consequence, we are going to track the numbers of non-matched vertices which have still i half-edges to be paired and the number of already matched vertices that have j half-edges remaining. This will give one different ODE per value of i of j. Since ⇡U and ⇡V are sub-Gaussian, we will prove that with arbitrarily high probability - exponential in N -, there are only a polynomial number of such equations 3. All those differential equations are then “aggregated” to build the final ODE satisfied by the matching size. 
Interestingly, this aggregated ODE has a simple form, while the full system is on the other hand quite intricate. In the following sub-sections, we separate the proofs in the different building blocks to provide intuitions; the proof of technical lemmas are deferred to the appendix. 3.1 Building the graph together with the matching The first step in the analysis is to notice that the bipartite configuration model can be constructed by sequentially pairing the half-edges coming from V . The matching generated by GREEDY is then constructed simultaneously with the graph. More precisely, given two sequences5 of nonnegative integers dU = (dU1 , . . . , dUN ) and d V [ {dVT+1} = (d V 1 , . . . , d V T , d V T+1), we introduce in the following a generating algorithm that simultaneously build the associated bipartite configuration model CM(dU ,dV) together with GREEDY. Recall that the bipartite configuration model is obtained through a uniform matching between the half-edges of U and the half-edges of V . To avoid confusion, we will call a marked matching a pairing of two half-edges that corresponds to an edge that will belong to the constructed matching M. This construction pseudo-code is detailed in Algorithm 1. Algorithm 1: GREEDY MATCHING CONFIGURATION MODEL WITHOUT CAPACITIES Input: dU = (dU1 , . . . , d U N ) and d V = (dV1 , . . . , d V T ) Initialization. M0 ;, E0 ; and HU0 { half-edges of U} for t = 1, . . . , T do Order uniformly at random the edges emanating from vt: et1, . . . , etkt for i = 1, . . . , kt do Choose uniformly an half-edge eUi in HU E E [ {u(eUi ), vt} // Create an edge between e t i and e U i HU HU \ {eUi } // Remove the half-edge if vt and u(eUi ) unmatched then Mt Mt 1 [ {u(eUi ), vt} // vt is matched end end end CM(dU ,dV) (U ,V, E). Output: Bipartite configuration model CM(dU ,dV) and matching MT on it. Since each pairing of each half-edge is done uniformly at random, the graph obtained at the end of the algorithm has indeed the law of a bipartite configuration model. Moreover, it is easy to see that M corresponds to the matching constructed by GREEDY MATCHING on CM(dU ,dV). 3.2 Differential Equation Method - Stochastic Approximation As mentioned above, several quantities are going to be tracked through time: for all k 2 {0, . . . , T} and all i 0, we define: • Fi(k) as the number of vertices u 2 U that are not yet matched at the end of step k and whose remaining degree is i, meaning that du i of their initial half-edges have been paired. We will refer them to as free vertices. • Mi(k) as the number of vertices u 2 U already matched at the end of step k and whose remaining degree is i. We will refer them to as marked vertices. Notice that for all 0 k T , the sum Fi(k) +Mi(k) corresponds to the total number of vertices of U with remaining degree i at the end of step k. We also define • bF (k) := P i 0 iFi(k) is the number of available half-edges attached to free vertices at the end of step k, 5Without loss of generality, we assume that the additional extra vertex is always on the V side. • cM(k) := P i 0 iMi(k) is the number of available half-edges attached to marked vertices at the end of step k. We are going to study the evolution of these quantities along with the one of GREEDY. A major ingredient of the proof is to show that Fi(k) and Mi(k) closely follow the solutions of some ODE. This is the so-called differential equation method [Wormald, 1995], stated in Appendix C. For instance, it can easily be seen that bF (k) + cM(k) closely follows the function t 7! 
µU tµV on (0, µU/µV) in the following sense. Lemma 1. For every " > 0, and for all 0 k T , bF (k) + cM(k) N µU k N µV ⌘ ". with probability at least 1 exp N✏2 2 2U + exp T ✏2 2 2V . We now turn to each individual quantity Fi (resp. Mi). We can prove a similar result, yet the limit function is not explicit (unlike for the matching size as in Theorem 1 statement). The following Lemma 2 states that the discrete sequences of (free and marked) half-edges are closely related to the solutions of some system of differential equations. Before stating it, we first introduce, for any sequence of non-negative numbers (x`)` 0 and (y`)` 0 such that 0 < P ` `(x` + y`) <1, every i 0, the following mappings i(x0, x1, . . . , y0, y1, . . .) := iµVxi + (i+ 1)µVxi+1 h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) (5) and i(x0, x1, . . . , y0, y1, . . .) := iµVyi + (i+ 1)µVyi+1 + h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) , where h is the following function, well-defined on [0, 1], h(s) = 1 V(s) 1 s . Lemma 2. With probability 1 ⇣N exp( ⇠N c/2), there are at most N c quantities Fi and Mi, and for all 0 k T and all i 0 Fi(k) N fi ✓ k N ◆ N 2c and Mi(k) N mi ✓ k N ◆ N 2c, where ⇣, depend only on the (first two) moments of ⇡V and ⇡U and c = 1/20. The continuous mappings fi and mi are solutions of the system of differential equations on [0, µU/µV) dfi dt = i(f0, f1, . . . ,m0,m1, . . .), dmi dt = i(f0, f1, . . . ,m0,m1, . . .), fi(0) = ⇡U (i), mi(0) = 0. (6) This system is well defined as stated by the following Lemma 3. Lemma 3. The system (6) has a unique solution which is well-defined on [0, µU/µV). More precisely, denoting by f and m the generating series of the sequences (fi)i 0 and (mi)i 0, f(t, s) = X i 0 fi(t)s i and m(t, s) = X i 0 mi(t)s i, it holds that: f ✓ µU µV 1 e µV t , s ◆ = U (s 1)e µV t + 1 F (t) , (7) and m ✓ µU µV 1 e µV t , s ◆ = ˆ t 0 F 0(u) 0U (s 1)e µVu + 1 F (u) du. where F is a solution of the following ODE 1 µU 0U (1 F (t)) 1 V ⇣ 1 1µU 0 U (1 F (t)) ⌘F 0(t) = e µV t . 3.3 Aggregating solutions to compute GREEDY performances To get Theorem 1, notice that the number of vertices matched by GREEDY is N minus the number of free vertices remaining at the end, which is approximately equal to Nf(µUµV , 1) by definition of f and because of Lemma 2. This corresponds to t = +1 in Equation (7), thus the performance of GREEDY is, with arbitrarily high probability, arbitrarily close to N(1 U (1 F (+1))) The statement of Theorem 1 just follows from a simple final change of variable. Conclusion We studied theoretical performances of GREEDY algorithm on matching problems with different underlying structures. Those precise results are quite interesting and raise many questions, especially since GREEDY actually outperforms RANKING in many different situations (in theory for 2-regular graphs, but empirical evidence indicates that this happens more generically). Our approach has also successfully been used to unveil some questions on the comparison between different possible models. But more general questions are still open; for instance, assuming that the expected degree is fixed, which situation is the more favorable to GREEDY and online algorithm: small or high variance, or more generally this distribution ⇡U or an alternative one ⇡0U ? The obvious technique would be to compare the solution of the different associated ODE’s. Similarly, the questions of stability/robustness of the solution to variation in the distribution ⇡U and ⇡V are quite challenging and left for future work. 
We believe online matching will become an important problem for the machine learning community in the future. Each year, the complexity of the underlying graphs increases, and we are considering adding features to the model in future work (such as random variables on the edges, modeling the interest of a consumer for a given product), or connections modeled via some kernel between vertex features (say, if users and products/campaigns are embedded in the same space). In this context, machine learning tools will certainly be needed to tackle the problem.

Acknowledgments and Disclosure of Funding

V. Perchet acknowledges support from the ANR under grant number #ANR-19-CE23-0026, as well as the support grant as part of the Investissement d'avenir project, reference LabEx Ecodec/ANR11-LABX-0047. Nathan Noiry also acknowledges support from the Telecom Paris DSAIDIS chair and from the ANR ProGraM (ANR-19-CE40-0025). Flore Sentenac is supported by IP PARIS' PhD Funding.
1. What is the focus of the paper regarding online bipartite matching? 2. What is the contribution of the paper regarding the Greedy algorithm's competitive ratio? 3. How does the reviewer assess the significance and originality of the paper's content? 4. How does the reviewer evaluate the clarity of the paper's writing?
Summary Of The Paper Review
Summary Of The Paper
The paper studies the online bipartite matching problem with capacities in random graphs that are generated by a stochastic process called the configuration model. The configuration model generates random graphs whose degree distributions are sub-Gaussian. The main contribution of the paper is an analysis of the competitive ratio achieved by the well-studied Greedy algorithm for online bipartite matching. The main result relates the competitive ratio to the solution of a certain ODE. The ODE can either be solved in closed form for some special cases of interest or it can be solved numerically to obtain estimates of the competitive ratio. As a corollary of the main result, the paper derives competitive ratios for several special cases, including random d-regular graphs and Erdos-Renyi graphs.

Review
Significance: The paper addresses a fundamental problem in online algorithms, which is well studied and has applications in online advertising. The Greedy algorithm is also a well-studied algorithm in both the adversarial setting and stochastic settings. The current paper contributes to this line of work by analyzing Greedy in more general random graph models and relating its competitive ratio to an ODE that can be solved either numerically or in closed form in some settings. The questions addressed are mathematically interesting. On the negative side, the paper does not discuss how the mathematical results presented can benefit the machine learning community. The paper does not connect the random graph model considered to real-world instances. Overall, the paper seems to be a better fit for a TCS or Mathematics venue.

Novelty/originality: The result appears to be novel and the approach could potentially lead to explicit competitive ratios for more general random instances.

Clarity: The content in the main body is reasonably clear. I have not checked the correctness of the proofs included in the appendix.
NIPS
Title Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm Abstract Motivated by sequential budgeted allocation problems, we investigate online matching problems where connections between vertices are not i.i.d., but they have fixed degree distributions – the so-called configuration model. We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant stochastic discrete processes by their continuous counterparts, which are solutions of an explicit system of partial differential equations. This technique gives precise bounds on the estimation errors, with arbitrarily high probability as the problem size increases. In particular, it allows the formal comparison between different configuration models. We also prove that, quite surprisingly, GREEDYcan have better performance guarantees than RANKING, another celebrated algorithm for online matching that usually outperforms the former. 1 Introduction Finding matchings in bipartite graphs (U[V , E), where E ⇢ U⇥V is a set of edges, is a long-standing problem with different motivations and approaches [Godsil, 1981, Zdeborová and Mézard, 2006, Lovász and Plummer, 2009, Bordenave et al., 2013]. If U is seen as a set of resources and V as demands, the objective is to allocate as many resources to demands (an allocation - or a matching - between u and v is admissible if (u, v) 2 E) with the constraint that a resource is allocated to only one demand and vice-versa. Motivated particularly by practical applications to Internet advertising, the online variant of this problem is receiving increasing attention (we refer to the excellent survey [Mehta, 2012] for more applications, specific settings, results and techniques). In this case, the set of vertices U is present at the beginning and the graph unveils sequentially: vertices v 2 V are observed sequentially, one after the other, along with the edges they belong to. An online algorithm must decide, right after observing vk and its associated set of edges Ek := {(u, vk) 2 E} to match it to some other vertex u 2 U , at the conditions that (u, vk) 2 Ek and u 2 U has not been matched yet. The performance of an online algorithm is evaluated by its competitive ratio, which is the ratio between the size of the matching it has created and the highest possible matching in hindsight [Feldman et al., 2009]. This theoretical setting is particularly well suited for online advertising: U is the set of campaigns/ads that an advertiser can run and users v1, v2, . . . , vT arrive sequentially [Mehta, 2012, Manshadi et al., 2012]. Some of them are eligible for a large subset of campaigns, others are not (usually based 35th Conference on Neural Information Processing Systems (NeurIPS 2021). on their attributes/features, such as the geographic localization, the browsing history, or any other relevant information). The objective of an advertiser (in this over-simplified model) is to maximize the number of displayed ads. In practice, campaigns/ads are not displayed only once but have a maximal budget of impressions (say, a specific ad can be displayed only 10.000 times each day). A possible trick consists of duplicating the vertices of U as many times as the budget. However, this results in strong and undesirable correlations between vertices. 
It is, therefore, more appropriate to consider a bipartite graph with capacities and admissible matchings as subsets of edges such that each vertex belongs to several different edges, but not more than their associated capacities ! 2 N (a vertex v 2 V is matched once while u 2 U can be matched !u times). This online matching problem with capacities has been quite extensively studied. It is known that GREEDY, which matches all incoming vertices to any available neighbor has a competitive ratio of 1/2 in the worst case, albeit it achieves 1 1/e as soon as the incoming vertices arrive in Random Order [Goel and Mehta, 2008b]. The worst-case optimal algorithm is the celebrated RANKING, which achieves 1 1/e on any instance [Karp et al., 1990, Devanur et al., 2013, Birnbaum and Mathieu, 2008], and also has better guarantees in the Random Order setting [Mahdian and Yan, 2011]. Beyond the adversarial setting, the following stochastic setting has been considered: there exist a finite set of L “base” vertices v(1), . . . , v(L) associated to base edge-sets E(1), . . . , E(L). When a vertex vk arrives, its type ✓k 2 {1, . . . , L} is drawn iid from some distribution (either known beforehand or not) and then its edge set is set as Ek = E(✓k). In the context where the distribution is known, algorithms with much better competitive ratios than GREEDY or RANKING were designed [Manshadi et al., 2012, Jaillet and Lu, 2014, Brubach et al., 2019], specifically with a competitive ratio of 1 2/e2 when the expected number of arrival of each type are integral and 0.706 without this assumption. Notably, those competitive ratios still hold with Poisson arrival rates rather than a fixed number of arrivals. On a side note, a vast line of work considers online matching in weighted graphs [Devanur et al., 2012, Goel and Mehta, 2008a, Mehta, 2012], which is outside the scope of this paper. However, it is still worth noting that the unweighted graph is a weighted graph with all weights equal. This model of the stochastic setting is quite interesting but rather strong: it lacks flexibility and cannot be used to represent some challenging instances ( for example when the degrees of each vertex U increase linearly with the number of vertices in V , or when the set U of campaigns must be fixed so that the model is well specified, etc...). Another tentative is to consider Erdős-Rényi graphs assuming that each possible edge is present in U ⇥V with some fixed probability and independently of the other edges (see [Mastin and Jaillet, 2013]). The most interesting and challenging setting corresponds to the so-called sparse regime where each vertex of U has an expected degree independent of the size n of V , which amounts to take a probability of connection equal to c/n. Interestingly enough, even the analysis of the simplest GREEDY algorithm is quite challenging and already insightful in those models [Borodin et al., 2018, Arnosti, 2019, Dyer et al., 1993, Mastin and Jaillet, 2013]. Unfortunately, although this Erdős-Rényi model is compatible with growing sets U and V , it also turns out to be quite restrictive. The main reason is that the approximate Poisson degree distribution of the vertices has light-tail and does not allow for the appearance of the so-called scale-free property satisfied by many real-world networks [Barabási et al., 2000, Van Der Hofstad, 2016]. We, therefore, consider a more appropriate random graphs generation process called configuration model, introduced by [Bender and Canfield, 1978] and [Bollobás, 1980]. 
The optimal matching of this model has been computed in [Bordenave et al., 2013]. The configuration model is particularly well suited to handle different situations such as the following one. Assume that campaigns can either be “intensive” (with many eligible users) or “selective/light” (few eligible users), with an empirical proportion of, say, 20%/80%. Then whether an advertiser handles 100 campaigns at the same time or 10.000, it will always have roughly this proportion of intensive vs. light campaigns. Similarly, some users are more valuable than others, and are thus eligible for more campaigns than the others; the proportion of each type being independent of the total population size. The configuration model accommodates these observations by basically drawing iid degrees for vertices U and V (accordingly to some different unknown distributions for U and V) and then by finding a graph such that those degrees distribution are satisfied (up to negligible errors); as a consequence, the graphs generated are sparse, in the sense that the number of edges grows linearly with the number of vertices. Additionally, the configuration model is a well-suited random graph model which mimics a number of properties of real-world complex networks, while being analytically tractable. For instance, choosing power-law distributions for the degrees allows to obtain the so-called scale-free property (often observed in practice, as highlighted for the web by Faloutsos et al. [1999]). The configuration model also displays the so called “small-world phenomenon” (observed for instance in the graph of Facebook by Backstrom et al. [2012]) as its diameter is of logarithmic order. Main contribution We investigate the performances (in terms of expected competitive ratio) of the GREEDY matching algorithm in configuration models and we provide explicit quantitative results using stochastic approximation techniques [Wormald, 1995]; we prove that the increasing size of the random matching created is arbitrarily close to the solution of some explicit ODE. Solving the latter then gives in turn the solution to the original problem. The remaining of the paper is organized as follows. Section 2 describes precisely the problem and Theorem 1 is our first main result: it describes the performances of GREEDY in the capacity-less problem. The proof of Theorem 1 is delayed to Appendix D, but the main ideas and intuitions are provided in Section 3. The online matching with capacities problem is treated in Appendix A. 2 Online Matching Problems; Models and main result Consider a bipartite graph with capacities G = (U ,V, E ,!) where U = {1, . . . , N} and V = {1, . . . , T} are two finite set of vertices, E ⇢ (u, v), u 2 U , v 2 V is the set of edges and ! : U ! N⇤ is a capacity function. A matching M on G is a subset of edges e 2 E such that any vertex v 2 V is the endpoint of at most one edge e 2 M and any vertex u 2 U is the endpoint of at most !u edges in M . We will denote by M the set of matchings on G; the optimal matching M⇤ 2M is the one (or any one) with the highest cardinality, denoted by |M⇤|. The batched matching problem consists in finding any optimal matching M⇤ given a graph with capacities G; the online variant might be a bit more challenging, as the matching is constructed sequentially. Formally, the set of vertices U and their capacities ! are known from the start, and vertices v 2 V arrive sequentially (with the edges they belong to) and M0 = ;. 
At stage t 2 N – assuming a matching Mt 1 has been constructed –, a decision maker observes a new vertex1 vt and its associated set of edges {(u, vt);u 2 E}. If possible, one of these edges (ut, vt) is added to Mt 1, with the constraint that Mt = Mt 1 [ {(ut, vt)} is still a matching. The objective is to maximize the size of the constructed matching MT . The classical way to evaluate the performances of an algorithm is the competitive ratio, defined as |MT |/|M⇤| 2 [0, 1] (the higher the better). 2.1 Structured online matching via Configuration Model As mentioned before, the online matching problem can be quite difficult without additional structure. We will therefore assume that the vertex degrees in U and V have (at least asymptotically in N and T ) some given subGaussian2 distributions ⇡U and ⇡V , of respective expectation µU and µV and respective proxy-variance 2U and 2 V . Those numbers are related in the sense that we assume 3 that T = µUµV N 2 N. Given those degree distributions, the graphs we consider are random draws from a bipartite configuration model described below; for the sake of clarity, we first consider the capacity-less case (when !u = 1 for all u 2 U). Given ⇡U and ⇡V and N,T 1, let dU1 , . . . , dUN 2 N i.i.d. ⇠ ⇡U and dV1 , . . . , dVT 2 N i.i.d. ⇠ ⇡V be independent random variables; intuitively, those numbers are respectively the number of half-edges attached to vertex in U and V . Consider also two extra random variables dVT+1 = max NX i=1 dUi TX j=1 dVj , 0 and dUN+1 = max TX j=1 dVj NX i=1 dUi , 0 1Although the order of arrival is irrelevant to the models we studied, it could have an impact on other models. 2X is subGaussian with proxy-variance 2 if for any s 2 R,E[exp(sX)] exp ⇣ 2s2 2 ⌘ . Actually, we only need that ⇡U and ⇡V have some finite moment of order > 2. 3In the general case, consider T = bNµU/µVc. The proof is identical, up to a negligible 1/N error term so that equality between total degrees holds, i.e., PN+1 i=1 d U i = PT+1 j=1 d V j . Finally, a random (capacity- less) bipartite graph denoted by CM(dU ,dV ) is constructed with a uniform pairing of half-edges of U [ {N + 1} with half-edges of V [ {T + 1} and removing vertices T + 1 and N + 1 and their associated edges. These two artificially added vertices are just here to define a pairing between half-edges. Notice that, by the law of large numbers and since T = (µU/µV)N , dVT+1 = o(N) and dUN+1 = o(N) almost surely 4. The bipartite configuration model CM(dU ,dV) is then the random graph obtained by a uniform matching between the half-edges of U and the half-edges of V , where the random sequences dU = (dUi )i and dV = (dVj )j are defined as above. 2.2 Competitive ratio of GREEDY algorithm. Main result The first question to investigate in this structured setting is the computation of the (expected) competitive ratio of the simple algorithm GREEDY. It constructs a matching by sequentially adding any admissible edge uniformly at random. Describing it and stating our results require the following additional notations: for any e = (u, v) 2 E, u(e) = u (resp. v(e) = v) is the extremity of e in U (resp. V); the generating series of ⇡U and ⇡V are denoted by U and V and are defined as U (s) := X k 0 ⇡U (k)s k and V(s) := X k 0 ⇡V(k)s k. Our first main theorem, stated below, identifies the asymptotic size of the matching generated by GREEDY on the bipartite configuration model we have just defined. 
As the batched problem (i.e., computing the size of the optimal matching M⇤) is well understood [Bordenave et al., 2013], this quantity is sufficient to derive competitive ratios. Again, for the sake of presentation, we first assume that all capacities are fixed, equal to one; the general case is presented in Appendix A. Theorem 1. (Performances of GREEDY in the capacity-less case) Given N 1 and T = µUµV N , let MT be the matching built by GREEDY on CM(d U ,dV ) then the following convergence in probability holds: |MT | N P ! N!+1 1 U (1 G(1)). where G is the unique solution of the following ordinary differential equation: G0(s) = 1 V ⇣ 1 1µU 0 U (1 G(s)) ⌘ µV µU 0U (1 G(s)) ; G(0) = 0. (1) Moreover, for any s 2 [0, 1], if MT (s) is the matching obtained by GREEDY after seeing a proportion s of vertices of V , then |MT (s)| N P ! N!+1 1 U (1 G(s)). (2) Convergence rates are explicit; with probability exponentially large, at least 1 ⇣N exp( ⇠N c/2), sup s2[0,1] |MT (s)| N 1 U (1 G(s)) N c, where ⇣, ⇠, depend only on the (first two) moments of both ⇡V and ⇡U , and c is some universal constant (set arbitrarily as 1/20 in the proof). Theorem 1 generalizes to the case with capacities, see Sections A.1 and A.2. The details of the proof of Theorem 1 are postponed to Appendix D, but the main ideas are given in Section 3. 2.3 Examples, Instantiations and Corollaries We provide in this section some interesting examples and corollaries that illustrate the powerfulness of Theorem 1, and how it can be used to compare different situations. 4And even O( p N) with probability exponentially large in N as both distributions are sub-Gaussian. So the effects of those additional vertices can be neglected. 2.3.1 d-regular graphs The first typical example of random graphs are “ d-regular ”, for some d 2 N, i.e., graphs such that each vertex has an exact degree of d (to avoid trivial examples, we obviously assume d 2). It is non-trivial to sample a d-regular graph at random, yet it is easy to generate random graphs GN with the configuration model described above, with the specific choices of ⇡U = ⇡V = d, the Dirac mass at d. The downside is that GN is not exactly a d-regular bipartite random graph (as some vertices might be connected more than once, i.e., there might exist parallel edges). However, conditioned to be simple, i.e, without multiple edges and loops, it has the law of a uniform d-regular bipartite random graph. Moreover, the probability of being simple is bounded away from 0 [Van Der Hofstad, 2016]; as a consequence, any property holding with probability tending to 1 for GN , holds with probability tending to 1 for uniform d-regular bipartite random graphs. Finally, we also mention that Hall’s Theorem [Frieze and Karoński, 2016] implies that GN admits a perfect matching, so that |M⇤| = N . Instantiating Equation (1) to d-regular graphs yields that the competitive ratio of GREEDY converges, with probability 1, to 1 (1 G(1))d where G is the solution of the following ODE (1 G(s))d 1 1 (1 (1 G(s))d 1)d G0(s) = 1 d . (3) As expected, had we taken d = 1, then G(s) = s hence the competitive ratio of GREEDY is 1 (but again, d = 1-regular graphs are trivial). More interestingly, if d = 2, the ODE has a closed form solution: G(s) = exp( s2 ) 1, so that the competitive ratio of GREEDY converges to 4 p e (e+ 3) ' 0.877 1 1e ' 0.632, where the latter is a standard bound of the competitive ratio of GREEDY (for general, non-regular graphs) [Mehta, 2012]. 
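Equation (3) above is also easy to integrate numerically. The following sketch is our own illustration (it assumes scipy is available; the function name greedy_ratio_d_regular is ours): it solves the ODE for G on [0, 1] with solve_ivp and reports 1 − (1 − G(1))^d, reproducing the d = 2 value above and, for d = 3, the value of roughly 0.89 reported in the next paragraph.

import numpy as np
from scipy.integrate import solve_ivp

def greedy_ratio_d_regular(d):
    """Integrate Equation (3) for G on [0, 1] and return 1 - (1 - G(1))^d,
    the asymptotic fraction of offline vertices matched by GREEDY."""
    def rhs(s, g):
        q = (1.0 - g[0]) ** (d - 1)                  # (1 - G(s))^(d-1)
        return [(1.0 - (1.0 - q) ** d) / (d * q)]    # ODE (3) solved for G'(s)
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0], rtol=1e-10, atol=1e-12)
    return 1.0 - (1.0 - sol.y[0, -1]) ** d

for d in (2, 3, 4, 5):
    print(d, round(greedy_ratio_d_regular(d), 4))    # d=2 -> ~0.8766, d=3 -> ~0.89

The right-hand side stays bounded because 1 − (1 − q)^d ≤ dq, so the integration is not stiff.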
Solving Equation (3) In the general case d 3, even if Equation (3) does not have a closed form solution, it is still possible to provide some insights. Notice first that the polynomial P (X) = 1 (1 (1 X)d 1)d admits n := d(d 1) roots, among which there is 1 with multiplicity d 1. If X is another root, then 1 (1 X)d 1 d = 1 , 1 (1 X)d 1 = e ik⇡ d , k = 1, . . . , d 1. Therefore, (1 X)d 1 = 1 e ik⇡ d , which admits d 1 distinct solutions for each k = 1, . . . , d 1. The resulting n := (d 1)2 distinct complex, denoted x1, . . . , xn, are the roots of P (X)/(1 X)d 1, so the ODE reduces to: y0(t)Q 1in y(t) xi = 1 d . (4) Since the following trivially holds: 1Q 1in(X xi) = X 1in 1Q j 6=i(xi xj) 1 X xi =: X 1in ai X xi . it is possible to integrate Equation (4) in P 1in ai log(y(t) xi) = s d + c to finally get Y 1in (y(t) xi) ai = C exp( s d ), and since y(0) = 0, it must hold that C = Q 1in( xi) ai . As a consequence, y(1) solves: Y 1in (y(1) xi) ai = e1/d Y 1in ( xi) ai . Unfortunately, even for d = 3, the solution somehow simplifies but has no closed form; on the other hand, numerical computations indicate that the competitive ratio of GREEDY converges to 0.89 when d = 3 and N tends to infinity. We provide in Figure 3 the numerical solutions of the ODE for d-regular graphs (actually, we draw the functions 1 U (1 G(s)) that are more relevant) for various values of d; the end-point obtained at s = 1 indicates the relative performance of GREEDY. As expected, those functions are point-wise increasing with d (as the problem becomes simpler and simpler for GREEDY when d 2). 2.3.2 The Erdős-Rényi case. In a Erdős-Rényi graph, there is an edge between two vertices u 2 U and v 2 V with some probability p = cN , independently from each others. As N goes to infinity, the number of edges to a vertex follows (approximately) a Poisson law of parameter c > 1. As a consequence, we consider the configuration model where ⇡U and ⇡V are Poisson laws of parameter c, which yields µ = c, U (s) = ec(s 1). In this case, Equation (1) becomes: cG0(s) e cG(s) 1 e c e cG(s) = 1. The solutions are given by: G(s) = 1 c log ✓ c log(ek cs +1) ◆ , yielding X(1 G(s)) = 1 c log ek cs +1 . The initial condition U (1 G(0)) = U (1) = 1 gives ek = ec 1, from which we deduce that the number of matches of GREEDY is asymptotically proportional to 1 U (1 G(1)) = 1 log (2 e c) c , which recovers, as a sanity check, some existing results [Mastin and Jaillet, 2013]. 2.3.3 The comparison of different configuration models Using Gronwall’s Lemma, it is possible to show Theorem 1 can be used to compare different configuration models, as in the following Corollary. Corollary 1. Consider two configuration models CM1(dU1 ,d V 1 ) and CM2(d U 2 ,d V 2 ), s.t. d U 1 and dU1 are both drawn i.i.d. from ⇡U , d V 1 is drawn i.i.d. from ⇡ 1 V and d V 2 is drawn i.i.d. from ⇡2V , with P x x⇡ 1 V (x) = P x x⇡ 2 V (x). If 1 V (s) 2 V (s) for any s 2 (0, 1), then by denoting respectively 1 and 2 the asymptotic proportion of vertices matched by GREEDY in CM1(dU1 ,d V 1 ) and CM2(dU2 ,d V 2 ), it holds that necessarily 2 1. For instance, let us assume that the degree distribution on the offline side is fixed. Then the matching size obtained by GREEDY is asymptotically larger if vertices on the online side all have exactly the same degree d rather than if those degrees are drawn from a Poisson distribution with expectation d. 
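Before moving on, a quick numerical cross-check of the Erdős-Rényi formula above: the sketch below is our own illustration (scipy assumed, function name ours). It integrates the Poisson form of Equation (1) written in Section 2.3.2 and compares 1 − φ_U(1 − G(1)) with the closed form 1 − log(2 − e^{−c})/c.

import numpy as np
from scipy.integrate import solve_ivp

def greedy_fraction_poisson(c):
    """Integrate c G'(s) e^{-cG} / (1 - e^{-c e^{-cG}}) = 1 on [0, 1] and
    return 1 - phi_U(1 - G(1)) = 1 - e^{-c G(1)}."""
    def rhs(s, g):
        u = np.exp(-c * g[0])                     # phi_U(1 - G(s)) = e^{-cG}
        return [(1.0 - np.exp(-c * u)) / (c * u)]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0], rtol=1e-10, atol=1e-12)
    return 1.0 - np.exp(-c * sol.y[0, -1])

for c in (1.0, 2.0, 5.0):
    closed = 1.0 - np.log(2.0 - np.exp(-c)) / c
    print(f"c={c}: ODE {greedy_fraction_poisson(c):.6f}   closed form {closed:.6f}")

The two columns should agree up to the integration tolerance.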
A similar result (with a different criterion) holds with fixed degree distribution on the online side and differing one on the offline side. 2.4 GREEDY can outperform RANKING ! We recall that the RANKING algorithm, which is the worse case optimal, chooses at random a ranking over U and uses it to break ties (i.e., if two vertices u and u0 can be matched to vk, then it is the one with the smallest rank that is matched by RANKING). Quite surprisingly, we get that in the configuration model RANKING can have a worse competitive ratio than GREEDY, which advocates again for its thorough study. Proposition 1. Let R and G be the assymptotic performances of RANKING and GREEDY on the 2-regular graph. The following holds: G > R. In other words, GREEDY outperforms RANKING in the 2-regular graph. We conjecture that the above result actually holds for any d 2, and more generally for a wide class of distributions ⇡U and ⇡V (finding a general criterion would be very interesting). The proof of Proposition 1 is provided in Appendix G. The main idea is that in the 2-regular graph, RANKING is biased towards selecting as matches vertices with two remaining half-edges rather than just one. Indeed, vertices with only one remaining half-edge were not selected previously and thus have a higher rank. The vertices with only one remaining half-edge will not get matched in the subsequent iterations, so not picking them as matches is suboptimal. On the other hand, GREEDY picks any match uniformly at random and does not exhibit such bias. 3 Ideas of proof of Theorem 1 The main idea behind the proof of Theorem 1 (postponed to Section D) is to show that the random deterministic evolution of the matching size generated by GREEDY is closely related to the solution of some ODE (this is sometimes called “the differential equation method” [Wormald, 1995] or “stochastic approximations” [Robbins and Monro, 1951]). Computing the solution of the ODE is easier - if not explicitly, at least numerically in intricate cases - than estimating the performances of GREEDY by Monte-Carlo simulations and it provides qualitative, as well as quantitative, properties. Tracking the matching size is non-trivial because the vertices (in U and V) have different degrees, hence some of them are more likely to be matched than others. However, in the configuration model, each vertex has the same distribution of degrees before the sequences dU and dV are fixed. As a consequence, the proof relies on the three following techniques 1. The graph is built sequentially, along with the matching and not beforehand (fixing the ”randomness” at the beginning would be very difficult to handle in the analysis). Thankfully, this does not change the law of the graph generated (this is obviously crucial). 2. We are not only going to track the size of the matching built as we need to handle different probabilities of matching (and pairing the graph) for each vertex. As a consequence, we are going to track the numbers of non-matched vertices which have still i half-edges to be paired and the number of already matched vertices that have j half-edges remaining. This will give one different ODE per value of i of j. Since ⇡U and ⇡V are sub-Gaussian, we will prove that with arbitrarily high probability - exponential in N -, there are only a polynomial number of such equations 3. All those differential equations are then “aggregated” to build the final ODE satisfied by the matching size. 
Interestingly, this aggregated ODE has a simple form, while the full system is on the other hand quite intricate. In the following sub-sections, we separate the proofs in the different building blocks to provide intuitions; the proof of technical lemmas are deferred to the appendix. 3.1 Building the graph together with the matching The first step in the analysis is to notice that the bipartite configuration model can be constructed by sequentially pairing the half-edges coming from V . The matching generated by GREEDY is then constructed simultaneously with the graph. More precisely, given two sequences5 of nonnegative integers dU = (dU1 , . . . , dUN ) and d V [ {dVT+1} = (d V 1 , . . . , d V T , d V T+1), we introduce in the following a generating algorithm that simultaneously build the associated bipartite configuration model CM(dU ,dV) together with GREEDY. Recall that the bipartite configuration model is obtained through a uniform matching between the half-edges of U and the half-edges of V . To avoid confusion, we will call a marked matching a pairing of two half-edges that corresponds to an edge that will belong to the constructed matching M. This construction pseudo-code is detailed in Algorithm 1. Algorithm 1: GREEDY MATCHING CONFIGURATION MODEL WITHOUT CAPACITIES Input: dU = (dU1 , . . . , d U N ) and d V = (dV1 , . . . , d V T ) Initialization. M0 ;, E0 ; and HU0 { half-edges of U} for t = 1, . . . , T do Order uniformly at random the edges emanating from vt: et1, . . . , etkt for i = 1, . . . , kt do Choose uniformly an half-edge eUi in HU E E [ {u(eUi ), vt} // Create an edge between e t i and e U i HU HU \ {eUi } // Remove the half-edge if vt and u(eUi ) unmatched then Mt Mt 1 [ {u(eUi ), vt} // vt is matched end end end CM(dU ,dV) (U ,V, E). Output: Bipartite configuration model CM(dU ,dV) and matching MT on it. Since each pairing of each half-edge is done uniformly at random, the graph obtained at the end of the algorithm has indeed the law of a bipartite configuration model. Moreover, it is easy to see that M corresponds to the matching constructed by GREEDY MATCHING on CM(dU ,dV). 3.2 Differential Equation Method - Stochastic Approximation As mentioned above, several quantities are going to be tracked through time: for all k 2 {0, . . . , T} and all i 0, we define: • Fi(k) as the number of vertices u 2 U that are not yet matched at the end of step k and whose remaining degree is i, meaning that du i of their initial half-edges have been paired. We will refer them to as free vertices. • Mi(k) as the number of vertices u 2 U already matched at the end of step k and whose remaining degree is i. We will refer them to as marked vertices. Notice that for all 0 k T , the sum Fi(k) +Mi(k) corresponds to the total number of vertices of U with remaining degree i at the end of step k. We also define • bF (k) := P i 0 iFi(k) is the number of available half-edges attached to free vertices at the end of step k, 5Without loss of generality, we assume that the additional extra vertex is always on the V side. • cM(k) := P i 0 iMi(k) is the number of available half-edges attached to marked vertices at the end of step k. We are going to study the evolution of these quantities along with the one of GREEDY. A major ingredient of the proof is to show that Fi(k) and Mi(k) closely follow the solutions of some ODE. This is the so-called differential equation method [Wormald, 1995], stated in Appendix C. For instance, it can easily be seen that bF (k) + cM(k) closely follows the function t 7! 
µU tµV on (0, µU/µV) in the following sense. Lemma 1. For every " > 0, and for all 0 k T , bF (k) + cM(k) N µU k N µV ⌘ ". with probability at least 1 exp N✏2 2 2U + exp T ✏2 2 2V . We now turn to each individual quantity Fi (resp. Mi). We can prove a similar result, yet the limit function is not explicit (unlike for the matching size as in Theorem 1 statement). The following Lemma 2 states that the discrete sequences of (free and marked) half-edges are closely related to the solutions of some system of differential equations. Before stating it, we first introduce, for any sequence of non-negative numbers (x`)` 0 and (y`)` 0 such that 0 < P ` `(x` + y`) <1, every i 0, the following mappings i(x0, x1, . . . , y0, y1, . . .) := iµVxi + (i+ 1)µVxi+1 h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) (5) and i(x0, x1, . . . , y0, y1, . . .) := iµVyi + (i+ 1)µVyi+1 + h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) , where h is the following function, well-defined on [0, 1], h(s) = 1 V(s) 1 s . Lemma 2. With probability 1 ⇣N exp( ⇠N c/2), there are at most N c quantities Fi and Mi, and for all 0 k T and all i 0 Fi(k) N fi ✓ k N ◆ N 2c and Mi(k) N mi ✓ k N ◆ N 2c, where ⇣, depend only on the (first two) moments of ⇡V and ⇡U and c = 1/20. The continuous mappings fi and mi are solutions of the system of differential equations on [0, µU/µV) dfi dt = i(f0, f1, . . . ,m0,m1, . . .), dmi dt = i(f0, f1, . . . ,m0,m1, . . .), fi(0) = ⇡U (i), mi(0) = 0. (6) This system is well defined as stated by the following Lemma 3. Lemma 3. The system (6) has a unique solution which is well-defined on [0, µU/µV). More precisely, denoting by f and m the generating series of the sequences (fi)i 0 and (mi)i 0, f(t, s) = X i 0 fi(t)s i and m(t, s) = X i 0 mi(t)s i, it holds that: f ✓ µU µV 1 e µV t , s ◆ = U (s 1)e µV t + 1 F (t) , (7) and m ✓ µU µV 1 e µV t , s ◆ = ˆ t 0 F 0(u) 0U (s 1)e µVu + 1 F (u) du. where F is a solution of the following ODE 1 µU 0U (1 F (t)) 1 V ⇣ 1 1µU 0 U (1 F (t)) ⌘F 0(t) = e µV t . 3.3 Aggregating solutions to compute GREEDY performances To get Theorem 1, notice that the number of vertices matched by GREEDY is N minus the number of free vertices remaining at the end, which is approximately equal to Nf(µUµV , 1) by definition of f and because of Lemma 2. This corresponds to t = +1 in Equation (7), thus the performance of GREEDY is, with arbitrarily high probability, arbitrarily close to N(1 U (1 F (+1))) The statement of Theorem 1 just follows from a simple final change of variable. Conclusion We studied theoretical performances of GREEDY algorithm on matching problems with different underlying structures. Those precise results are quite interesting and raise many questions, especially since GREEDY actually outperforms RANKING in many different situations (in theory for 2-regular graphs, but empirical evidence indicates that this happens more generically). Our approach has also successfully been used to unveil some questions on the comparison between different possible models. But more general questions are still open; for instance, assuming that the expected degree is fixed, which situation is the more favorable to GREEDY and online algorithm: small or high variance, or more generally this distribution ⇡U or an alternative one ⇡0U ? The obvious technique would be to compare the solution of the different associated ODE’s. Similarly, the questions of stability/robustness of the solution to variation in the distribution ⇡U and ⇡V are quite challenging and left for future work. 
We believe online matching will become an important problem for the machine learning community in the future. Each year, the complexity of the underlying graphs increases, and we are considering adding features to the model in future work (such as random variables on the edges, modeling the interest of a consumer for a given product), or connections modeled via some kernel between vertex features (say, if users and products/campaigns are embedded in the same space). In this context, machine learning tools will certainly be needed to tackle the problem.

Acknowledgments and Disclosure of Funding

V. Perchet acknowledges support from the ANR under grant number #ANR-19-CE23-0026 as well as the support grant as part of the Investissement d'avenir project, reference LabEx Ecodec/ANR-11-LABX-0047. Nathan Noiry also acknowledges support from the Telecom Paris DSAIDIS chair and from the ANR ProGraM (ANR-19-CE40-0025). Flore Sentenac is supported by IP PARIS' PhD Funding.
1. What is the focus of the paper regarding bipartite matching in an online setting? 2. What are the strengths of the proposed approach, particularly in terms of its performance guarantee comparison with the state-of-the-art Ranking algorithm? 3. Do you have any concerns or suggestions regarding the presentation of the experimental results? 4. How could the authors improve the articulation and display of their experimental findings? 5. Are there any minor suggestions for improving the writing, figures, etc., in the paper?
Summary Of The Paper Review
Summary Of The Paper

The authors investigate bipartite matching in an online setting where the graph edges are generated via the configuration model (i.e. from a chosen degree distribution). They bound the estimation error, measured via the expected ratio between the online matching obtained by Greedy and the hindsight-optimal matching. This allows them to compare the performance guarantee for Greedy versus the state-of-the-art Ranking algorithm. The authors prove that surprisingly, there exist problem instances (including simple and natural ones) where Greedy outperforms Ranking.

Review

REVIEW The main result, which is a bound on the estimation error of Greedy for configuration graphs, is of theoretical interest. It is also interesting that this result permits a convenient comparison of Greedy's performance on different configuration graphs. Finally, I agree with the authors that it is interesting that Greedy outperforms Ranking on simple/natural graphs, such as 2-regular graphs. Obtaining these results is nontrivial. I see this not as a breakthrough theoretical result, but rather as an interesting step forward in a relevant area that requires nontrivial analysis.

In terms of style, the paper is generally well-written. However, typical NeurIPS papers make better use of subheadings throughout and offer brief exposition/high-level intuitions around each section. This paper is written more like a math journal paper. More structure and more high-level exposition would improve the writeup.

The experimental results figures are in general very difficult to parse and have some issues: they are missing labels on the axes; linetypes (dashes/dots) are not used to distinguish the lines; points are not used to show the actual points of comparison; I infer that (for example) the 4 panels in Appendix Fig. 5 and 6 are used to plot d=2,4,10,20 (this should be stated), but there are no titles on the figures, so is the top right plot d=4 or the bottom left plot d=4? Also, in the figures shown in the paper's text (Fig. 1), we are shown results for d=[2,3,4,6,10], and we are told results are nearly instantaneous. Given that computation is not a limiting factor, why not show us more results (in a different type of plot)? In general, for a NeurIPS paper I would expect a greater effort to articulate and display experimental results---improving this will improve the paper. Also, one of the last sentences of the paper mentions that you expect Greedy will outperform Ranking in various other scenarios---this is a tantalizing opportunity for a broader experimental approach (either for this or another paper). I will consider revising my review based on the authors' response.

Minor suggestions for improving the writing, figures, etc.:
- References are all lacking numbers in the References section. This makes it very difficult to check your references.
- Line 69 states "The main is that " --> maybe there is a missing word such as 'main limitation'?
- Convention suggests that your introduction should have subsections. It is a well-written introduction, but we only learn about your main result in the penultimate paragraph. You might consider highlighting it under a "Main contribution" subtitle or bold paragraph heading.
- The Algorithm \ref in line 129 is undefined.
- Multiple instances of the word GREEDY have a missing space after. If this is a macro, try a \ (slash) between GREEDY and any non-punctuation letters/words.
- Missing 's' in 'situations' on line 308.
- Line 212: folline -> offline
NIPS
Title Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm Abstract Motivated by sequential budgeted allocation problems, we investigate online matching problems where connections between vertices are not i.i.d., but they have fixed degree distributions – the so-called configuration model. We estimate the competitive ratio of the simplest algorithm, GREEDY, by approximating some relevant stochastic discrete processes by their continuous counterparts, which are solutions of an explicit system of partial differential equations. This technique gives precise bounds on the estimation errors, with arbitrarily high probability as the problem size increases. In particular, it allows the formal comparison between different configuration models. We also prove that, quite surprisingly, GREEDYcan have better performance guarantees than RANKING, another celebrated algorithm for online matching that usually outperforms the former. 1 Introduction Finding matchings in bipartite graphs (U[V , E), where E ⇢ U⇥V is a set of edges, is a long-standing problem with different motivations and approaches [Godsil, 1981, Zdeborová and Mézard, 2006, Lovász and Plummer, 2009, Bordenave et al., 2013]. If U is seen as a set of resources and V as demands, the objective is to allocate as many resources to demands (an allocation - or a matching - between u and v is admissible if (u, v) 2 E) with the constraint that a resource is allocated to only one demand and vice-versa. Motivated particularly by practical applications to Internet advertising, the online variant of this problem is receiving increasing attention (we refer to the excellent survey [Mehta, 2012] for more applications, specific settings, results and techniques). In this case, the set of vertices U is present at the beginning and the graph unveils sequentially: vertices v 2 V are observed sequentially, one after the other, along with the edges they belong to. An online algorithm must decide, right after observing vk and its associated set of edges Ek := {(u, vk) 2 E} to match it to some other vertex u 2 U , at the conditions that (u, vk) 2 Ek and u 2 U has not been matched yet. The performance of an online algorithm is evaluated by its competitive ratio, which is the ratio between the size of the matching it has created and the highest possible matching in hindsight [Feldman et al., 2009]. This theoretical setting is particularly well suited for online advertising: U is the set of campaigns/ads that an advertiser can run and users v1, v2, . . . , vT arrive sequentially [Mehta, 2012, Manshadi et al., 2012]. Some of them are eligible for a large subset of campaigns, others are not (usually based 35th Conference on Neural Information Processing Systems (NeurIPS 2021). on their attributes/features, such as the geographic localization, the browsing history, or any other relevant information). The objective of an advertiser (in this over-simplified model) is to maximize the number of displayed ads. In practice, campaigns/ads are not displayed only once but have a maximal budget of impressions (say, a specific ad can be displayed only 10.000 times each day). A possible trick consists of duplicating the vertices of U as many times as the budget. However, this results in strong and undesirable correlations between vertices. 
It is, therefore, more appropriate to consider a bipartite graph with capacities and admissible matchings as subsets of edges such that each vertex belongs to several different edges, but not more than their associated capacities ! 2 N (a vertex v 2 V is matched once while u 2 U can be matched !u times). This online matching problem with capacities has been quite extensively studied. It is known that GREEDY, which matches all incoming vertices to any available neighbor has a competitive ratio of 1/2 in the worst case, albeit it achieves 1 1/e as soon as the incoming vertices arrive in Random Order [Goel and Mehta, 2008b]. The worst-case optimal algorithm is the celebrated RANKING, which achieves 1 1/e on any instance [Karp et al., 1990, Devanur et al., 2013, Birnbaum and Mathieu, 2008], and also has better guarantees in the Random Order setting [Mahdian and Yan, 2011]. Beyond the adversarial setting, the following stochastic setting has been considered: there exist a finite set of L “base” vertices v(1), . . . , v(L) associated to base edge-sets E(1), . . . , E(L). When a vertex vk arrives, its type ✓k 2 {1, . . . , L} is drawn iid from some distribution (either known beforehand or not) and then its edge set is set as Ek = E(✓k). In the context where the distribution is known, algorithms with much better competitive ratios than GREEDY or RANKING were designed [Manshadi et al., 2012, Jaillet and Lu, 2014, Brubach et al., 2019], specifically with a competitive ratio of 1 2/e2 when the expected number of arrival of each type are integral and 0.706 without this assumption. Notably, those competitive ratios still hold with Poisson arrival rates rather than a fixed number of arrivals. On a side note, a vast line of work considers online matching in weighted graphs [Devanur et al., 2012, Goel and Mehta, 2008a, Mehta, 2012], which is outside the scope of this paper. However, it is still worth noting that the unweighted graph is a weighted graph with all weights equal. This model of the stochastic setting is quite interesting but rather strong: it lacks flexibility and cannot be used to represent some challenging instances ( for example when the degrees of each vertex U increase linearly with the number of vertices in V , or when the set U of campaigns must be fixed so that the model is well specified, etc...). Another tentative is to consider Erdős-Rényi graphs assuming that each possible edge is present in U ⇥V with some fixed probability and independently of the other edges (see [Mastin and Jaillet, 2013]). The most interesting and challenging setting corresponds to the so-called sparse regime where each vertex of U has an expected degree independent of the size n of V , which amounts to take a probability of connection equal to c/n. Interestingly enough, even the analysis of the simplest GREEDY algorithm is quite challenging and already insightful in those models [Borodin et al., 2018, Arnosti, 2019, Dyer et al., 1993, Mastin and Jaillet, 2013]. Unfortunately, although this Erdős-Rényi model is compatible with growing sets U and V , it also turns out to be quite restrictive. The main reason is that the approximate Poisson degree distribution of the vertices has light-tail and does not allow for the appearance of the so-called scale-free property satisfied by many real-world networks [Barabási et al., 2000, Van Der Hofstad, 2016]. We, therefore, consider a more appropriate random graphs generation process called configuration model, introduced by [Bender and Canfield, 1978] and [Bollobás, 1980]. 
The optimal matching of this model has been computed in [Bordenave et al., 2013]. The configuration model is particularly well suited to handle different situations such as the following one. Assume that campaigns can either be “intensive” (with many eligible users) or “selective/light” (few eligible users), with an empirical proportion of, say, 20%/80%. Then whether an advertiser handles 100 campaigns at the same time or 10.000, it will always have roughly this proportion of intensive vs. light campaigns. Similarly, some users are more valuable than others, and are thus eligible for more campaigns than the others; the proportion of each type being independent of the total population size. The configuration model accommodates these observations by basically drawing iid degrees for vertices U and V (accordingly to some different unknown distributions for U and V) and then by finding a graph such that those degrees distribution are satisfied (up to negligible errors); as a consequence, the graphs generated are sparse, in the sense that the number of edges grows linearly with the number of vertices. Additionally, the configuration model is a well-suited random graph model which mimics a number of properties of real-world complex networks, while being analytically tractable. For instance, choosing power-law distributions for the degrees allows to obtain the so-called scale-free property (often observed in practice, as highlighted for the web by Faloutsos et al. [1999]). The configuration model also displays the so called “small-world phenomenon” (observed for instance in the graph of Facebook by Backstrom et al. [2012]) as its diameter is of logarithmic order. Main contribution We investigate the performances (in terms of expected competitive ratio) of the GREEDY matching algorithm in configuration models and we provide explicit quantitative results using stochastic approximation techniques [Wormald, 1995]; we prove that the increasing size of the random matching created is arbitrarily close to the solution of some explicit ODE. Solving the latter then gives in turn the solution to the original problem. The remaining of the paper is organized as follows. Section 2 describes precisely the problem and Theorem 1 is our first main result: it describes the performances of GREEDY in the capacity-less problem. The proof of Theorem 1 is delayed to Appendix D, but the main ideas and intuitions are provided in Section 3. The online matching with capacities problem is treated in Appendix A. 2 Online Matching Problems; Models and main result Consider a bipartite graph with capacities G = (U ,V, E ,!) where U = {1, . . . , N} and V = {1, . . . , T} are two finite set of vertices, E ⇢ (u, v), u 2 U , v 2 V is the set of edges and ! : U ! N⇤ is a capacity function. A matching M on G is a subset of edges e 2 E such that any vertex v 2 V is the endpoint of at most one edge e 2 M and any vertex u 2 U is the endpoint of at most !u edges in M . We will denote by M the set of matchings on G; the optimal matching M⇤ 2M is the one (or any one) with the highest cardinality, denoted by |M⇤|. The batched matching problem consists in finding any optimal matching M⇤ given a graph with capacities G; the online variant might be a bit more challenging, as the matching is constructed sequentially. Formally, the set of vertices U and their capacities ! are known from the start, and vertices v 2 V arrive sequentially (with the edges they belong to) and M0 = ;. 
At stage t 2 N – assuming a matching Mt 1 has been constructed –, a decision maker observes a new vertex1 vt and its associated set of edges {(u, vt);u 2 E}. If possible, one of these edges (ut, vt) is added to Mt 1, with the constraint that Mt = Mt 1 [ {(ut, vt)} is still a matching. The objective is to maximize the size of the constructed matching MT . The classical way to evaluate the performances of an algorithm is the competitive ratio, defined as |MT |/|M⇤| 2 [0, 1] (the higher the better). 2.1 Structured online matching via Configuration Model As mentioned before, the online matching problem can be quite difficult without additional structure. We will therefore assume that the vertex degrees in U and V have (at least asymptotically in N and T ) some given subGaussian2 distributions ⇡U and ⇡V , of respective expectation µU and µV and respective proxy-variance 2U and 2 V . Those numbers are related in the sense that we assume 3 that T = µUµV N 2 N. Given those degree distributions, the graphs we consider are random draws from a bipartite configuration model described below; for the sake of clarity, we first consider the capacity-less case (when !u = 1 for all u 2 U). Given ⇡U and ⇡V and N,T 1, let dU1 , . . . , dUN 2 N i.i.d. ⇠ ⇡U and dV1 , . . . , dVT 2 N i.i.d. ⇠ ⇡V be independent random variables; intuitively, those numbers are respectively the number of half-edges attached to vertex in U and V . Consider also two extra random variables dVT+1 = max NX i=1 dUi TX j=1 dVj , 0 and dUN+1 = max TX j=1 dVj NX i=1 dUi , 0 1Although the order of arrival is irrelevant to the models we studied, it could have an impact on other models. 2X is subGaussian with proxy-variance 2 if for any s 2 R,E[exp(sX)] exp ⇣ 2s2 2 ⌘ . Actually, we only need that ⇡U and ⇡V have some finite moment of order > 2. 3In the general case, consider T = bNµU/µVc. The proof is identical, up to a negligible 1/N error term so that equality between total degrees holds, i.e., PN+1 i=1 d U i = PT+1 j=1 d V j . Finally, a random (capacity- less) bipartite graph denoted by CM(dU ,dV ) is constructed with a uniform pairing of half-edges of U [ {N + 1} with half-edges of V [ {T + 1} and removing vertices T + 1 and N + 1 and their associated edges. These two artificially added vertices are just here to define a pairing between half-edges. Notice that, by the law of large numbers and since T = (µU/µV)N , dVT+1 = o(N) and dUN+1 = o(N) almost surely 4. The bipartite configuration model CM(dU ,dV) is then the random graph obtained by a uniform matching between the half-edges of U and the half-edges of V , where the random sequences dU = (dUi )i and dV = (dVj )j are defined as above. 2.2 Competitive ratio of GREEDY algorithm. Main result The first question to investigate in this structured setting is the computation of the (expected) competitive ratio of the simple algorithm GREEDY. It constructs a matching by sequentially adding any admissible edge uniformly at random. Describing it and stating our results require the following additional notations: for any e = (u, v) 2 E, u(e) = u (resp. v(e) = v) is the extremity of e in U (resp. V); the generating series of ⇡U and ⇡V are denoted by U and V and are defined as U (s) := X k 0 ⇡U (k)s k and V(s) := X k 0 ⇡V(k)s k. Our first main theorem, stated below, identifies the asymptotic size of the matching generated by GREEDY on the bipartite configuration model we have just defined. 
As the batched problem (i.e., computing the size of the optimal matching M⇤) is well understood [Bordenave et al., 2013], this quantity is sufficient to derive competitive ratios. Again, for the sake of presentation, we first assume that all capacities are fixed, equal to one; the general case is presented in Appendix A. Theorem 1. (Performances of GREEDY in the capacity-less case) Given N 1 and T = µUµV N , let MT be the matching built by GREEDY on CM(d U ,dV ) then the following convergence in probability holds: |MT | N P ! N!+1 1 U (1 G(1)). where G is the unique solution of the following ordinary differential equation: G0(s) = 1 V ⇣ 1 1µU 0 U (1 G(s)) ⌘ µV µU 0U (1 G(s)) ; G(0) = 0. (1) Moreover, for any s 2 [0, 1], if MT (s) is the matching obtained by GREEDY after seeing a proportion s of vertices of V , then |MT (s)| N P ! N!+1 1 U (1 G(s)). (2) Convergence rates are explicit; with probability exponentially large, at least 1 ⇣N exp( ⇠N c/2), sup s2[0,1] |MT (s)| N 1 U (1 G(s)) N c, where ⇣, ⇠, depend only on the (first two) moments of both ⇡V and ⇡U , and c is some universal constant (set arbitrarily as 1/20 in the proof). Theorem 1 generalizes to the case with capacities, see Sections A.1 and A.2. The details of the proof of Theorem 1 are postponed to Appendix D, but the main ideas are given in Section 3. 2.3 Examples, Instantiations and Corollaries We provide in this section some interesting examples and corollaries that illustrate the powerfulness of Theorem 1, and how it can be used to compare different situations. 4And even O( p N) with probability exponentially large in N as both distributions are sub-Gaussian. So the effects of those additional vertices can be neglected. 2.3.1 d-regular graphs The first typical example of random graphs are “ d-regular ”, for some d 2 N, i.e., graphs such that each vertex has an exact degree of d (to avoid trivial examples, we obviously assume d 2). It is non-trivial to sample a d-regular graph at random, yet it is easy to generate random graphs GN with the configuration model described above, with the specific choices of ⇡U = ⇡V = d, the Dirac mass at d. The downside is that GN is not exactly a d-regular bipartite random graph (as some vertices might be connected more than once, i.e., there might exist parallel edges). However, conditioned to be simple, i.e, without multiple edges and loops, it has the law of a uniform d-regular bipartite random graph. Moreover, the probability of being simple is bounded away from 0 [Van Der Hofstad, 2016]; as a consequence, any property holding with probability tending to 1 for GN , holds with probability tending to 1 for uniform d-regular bipartite random graphs. Finally, we also mention that Hall’s Theorem [Frieze and Karoński, 2016] implies that GN admits a perfect matching, so that |M⇤| = N . Instantiating Equation (1) to d-regular graphs yields that the competitive ratio of GREEDY converges, with probability 1, to 1 (1 G(1))d where G is the solution of the following ODE (1 G(s))d 1 1 (1 (1 G(s))d 1)d G0(s) = 1 d . (3) As expected, had we taken d = 1, then G(s) = s hence the competitive ratio of GREEDY is 1 (but again, d = 1-regular graphs are trivial). More interestingly, if d = 2, the ODE has a closed form solution: G(s) = exp( s2 ) 1, so that the competitive ratio of GREEDY converges to 4 p e (e+ 3) ' 0.877 1 1e ' 0.632, where the latter is a standard bound of the competitive ratio of GREEDY (for general, non-regular graphs) [Mehta, 2012]. 
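As a quick sanity check on the d = 2 closed form just stated (our own sketch, using sympy; not part of the paper), one can verify symbolically that G(s) = e^{s/2} − 1 satisfies Equation (3) and that the resulting limit is 4√e − e − 3 ≈ 0.877.

```python
# Sketch only; sympy usage and variable names are ours.
import sympy as sp

s = sp.symbols('s', positive=True)
d = 2
G = sp.exp(s / 2) - 1                              # claimed closed form for d = 2

# Left-hand side of Equation (3); the difference with 1/d = 1/2 should vanish.
lhs = (1 - G) ** (d - 1) * sp.diff(G, s) / (1 - (1 - (1 - G) ** (d - 1)) ** d)
print(sp.simplify(lhs - sp.Rational(1, 2)))        # -> 0

# Limiting matched fraction 1 - psi_U(1 - G(1)) with psi_U(x) = x^d.
limit = 1 - (1 - G.subs(s, 1)) ** d
print(sp.simplify(limit))                          # equals 4*sqrt(e) - e - 3 (up to how sympy prints it)
print(float(limit))                                # ~ 0.8765, versus 1 - 1/e ~ 0.632
```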
Solving Equation (3) In the general case d 3, even if Equation (3) does not have a closed form solution, it is still possible to provide some insights. Notice first that the polynomial P (X) = 1 (1 (1 X)d 1)d admits n := d(d 1) roots, among which there is 1 with multiplicity d 1. If X is another root, then 1 (1 X)d 1 d = 1 , 1 (1 X)d 1 = e ik⇡ d , k = 1, . . . , d 1. Therefore, (1 X)d 1 = 1 e ik⇡ d , which admits d 1 distinct solutions for each k = 1, . . . , d 1. The resulting n := (d 1)2 distinct complex, denoted x1, . . . , xn, are the roots of P (X)/(1 X)d 1, so the ODE reduces to: y0(t)Q 1in y(t) xi = 1 d . (4) Since the following trivially holds: 1Q 1in(X xi) = X 1in 1Q j 6=i(xi xj) 1 X xi =: X 1in ai X xi . it is possible to integrate Equation (4) in P 1in ai log(y(t) xi) = s d + c to finally get Y 1in (y(t) xi) ai = C exp( s d ), and since y(0) = 0, it must hold that C = Q 1in( xi) ai . As a consequence, y(1) solves: Y 1in (y(1) xi) ai = e1/d Y 1in ( xi) ai . Unfortunately, even for d = 3, the solution somehow simplifies but has no closed form; on the other hand, numerical computations indicate that the competitive ratio of GREEDY converges to 0.89 when d = 3 and N tends to infinity. We provide in Figure 3 the numerical solutions of the ODE for d-regular graphs (actually, we draw the functions 1 U (1 G(s)) that are more relevant) for various values of d; the end-point obtained at s = 1 indicates the relative performance of GREEDY. As expected, those functions are point-wise increasing with d (as the problem becomes simpler and simpler for GREEDY when d 2). 2.3.2 The Erdős-Rényi case. In a Erdős-Rényi graph, there is an edge between two vertices u 2 U and v 2 V with some probability p = cN , independently from each others. As N goes to infinity, the number of edges to a vertex follows (approximately) a Poisson law of parameter c > 1. As a consequence, we consider the configuration model where ⇡U and ⇡V are Poisson laws of parameter c, which yields µ = c, U (s) = ec(s 1). In this case, Equation (1) becomes: cG0(s) e cG(s) 1 e c e cG(s) = 1. The solutions are given by: G(s) = 1 c log ✓ c log(ek cs +1) ◆ , yielding X(1 G(s)) = 1 c log ek cs +1 . The initial condition U (1 G(0)) = U (1) = 1 gives ek = ec 1, from which we deduce that the number of matches of GREEDY is asymptotically proportional to 1 U (1 G(1)) = 1 log (2 e c) c , which recovers, as a sanity check, some existing results [Mastin and Jaillet, 2013]. 2.3.3 The comparison of different configuration models Using Gronwall’s Lemma, it is possible to show Theorem 1 can be used to compare different configuration models, as in the following Corollary. Corollary 1. Consider two configuration models CM1(dU1 ,d V 1 ) and CM2(d U 2 ,d V 2 ), s.t. d U 1 and dU1 are both drawn i.i.d. from ⇡U , d V 1 is drawn i.i.d. from ⇡ 1 V and d V 2 is drawn i.i.d. from ⇡2V , with P x x⇡ 1 V (x) = P x x⇡ 2 V (x). If 1 V (s) 2 V (s) for any s 2 (0, 1), then by denoting respectively 1 and 2 the asymptotic proportion of vertices matched by GREEDY in CM1(dU1 ,d V 1 ) and CM2(dU2 ,d V 2 ), it holds that necessarily 2 1. For instance, let us assume that the degree distribution on the offline side is fixed. Then the matching size obtained by GREEDY is asymptotically larger if vertices on the online side all have exactly the same degree d rather than if those degrees are drawn from a Poisson distribution with expectation d. 
A similar result (with a different criterion) holds with a fixed degree distribution on the online side and a differing one on the offline side.

2.4 GREEDY can outperform RANKING!

We recall that the RANKING algorithm, which is worst-case optimal, chooses at random a ranking over U and uses it to break ties (i.e., if two vertices u and u′ can be matched to vk, then it is the one with the smallest rank that is matched by RANKING). Quite surprisingly, we get that in the configuration model RANKING can have a worse competitive ratio than GREEDY, which advocates again for its thorough study.

Proposition 1. Let R and G be the asymptotic performances of RANKING and GREEDY on the 2-regular graph. The following holds: G > R. In other words, GREEDY outperforms RANKING on the 2-regular graph.

We conjecture that the above result actually holds for any d ≥ 2, and more generally for a wide class of distributions πU and πV (finding a general criterion would be very interesting). The proof of Proposition 1 is provided in Appendix G. The main idea is that in the 2-regular graph, RANKING is biased towards selecting as matches vertices with two remaining half-edges rather than just one. Indeed, vertices with only one remaining half-edge were not selected previously and thus have a higher rank. The vertices with only one remaining half-edge will not get matched in the subsequent iterations, so not picking them as matches is suboptimal. On the other hand, GREEDY picks any match uniformly at random and does not exhibit such bias.

3 Ideas of proof of Theorem 1

The main idea behind the proof of Theorem 1 (postponed to Section D) is to show that the random evolution of the matching size generated by GREEDY is closely related to the solution of some deterministic ODE (this is sometimes called "the differential equation method" [Wormald, 1995] or "stochastic approximations" [Robbins and Monro, 1951]). Computing the solution of the ODE is easier - if not explicitly, at least numerically in intricate cases - than estimating the performances of GREEDY by Monte-Carlo simulations, and it provides qualitative, as well as quantitative, properties. Tracking the matching size is non-trivial because the vertices (in U and V) have different degrees, hence some of them are more likely to be matched than others. However, in the configuration model, each vertex has the same distribution of degrees before the sequences dU and dV are fixed. As a consequence, the proof relies on the three following techniques:

1. The graph is built sequentially, along with the matching, and not beforehand (fixing the "randomness" at the beginning would be very difficult to handle in the analysis). Thankfully, this does not change the law of the graph generated (this is obviously crucial).

2. We are not only going to track the size of the matching built, as we need to handle different probabilities of matching (and of pairing the graph) for each vertex. As a consequence, we are going to track the numbers of non-matched vertices which still have i half-edges to be paired and the numbers of already matched vertices that have j half-edges remaining. This gives one different ODE per value of i and j. Since πU and πV are sub-Gaussian, we will prove that with arbitrarily high probability - exponential in N - there are only a polynomial number of such equations.

3. All those differential equations are then "aggregated" to build the final ODE satisfied by the matching size.
Interestingly, this aggregated ODE has a simple form, while the full system is on the other hand quite intricate. In the following sub-sections, we separate the proofs in the different building blocks to provide intuitions; the proof of technical lemmas are deferred to the appendix. 3.1 Building the graph together with the matching The first step in the analysis is to notice that the bipartite configuration model can be constructed by sequentially pairing the half-edges coming from V . The matching generated by GREEDY is then constructed simultaneously with the graph. More precisely, given two sequences5 of nonnegative integers dU = (dU1 , . . . , dUN ) and d V [ {dVT+1} = (d V 1 , . . . , d V T , d V T+1), we introduce in the following a generating algorithm that simultaneously build the associated bipartite configuration model CM(dU ,dV) together with GREEDY. Recall that the bipartite configuration model is obtained through a uniform matching between the half-edges of U and the half-edges of V . To avoid confusion, we will call a marked matching a pairing of two half-edges that corresponds to an edge that will belong to the constructed matching M. This construction pseudo-code is detailed in Algorithm 1. Algorithm 1: GREEDY MATCHING CONFIGURATION MODEL WITHOUT CAPACITIES Input: dU = (dU1 , . . . , d U N ) and d V = (dV1 , . . . , d V T ) Initialization. M0 ;, E0 ; and HU0 { half-edges of U} for t = 1, . . . , T do Order uniformly at random the edges emanating from vt: et1, . . . , etkt for i = 1, . . . , kt do Choose uniformly an half-edge eUi in HU E E [ {u(eUi ), vt} // Create an edge between e t i and e U i HU HU \ {eUi } // Remove the half-edge if vt and u(eUi ) unmatched then Mt Mt 1 [ {u(eUi ), vt} // vt is matched end end end CM(dU ,dV) (U ,V, E). Output: Bipartite configuration model CM(dU ,dV) and matching MT on it. Since each pairing of each half-edge is done uniformly at random, the graph obtained at the end of the algorithm has indeed the law of a bipartite configuration model. Moreover, it is easy to see that M corresponds to the matching constructed by GREEDY MATCHING on CM(dU ,dV). 3.2 Differential Equation Method - Stochastic Approximation As mentioned above, several quantities are going to be tracked through time: for all k 2 {0, . . . , T} and all i 0, we define: • Fi(k) as the number of vertices u 2 U that are not yet matched at the end of step k and whose remaining degree is i, meaning that du i of their initial half-edges have been paired. We will refer them to as free vertices. • Mi(k) as the number of vertices u 2 U already matched at the end of step k and whose remaining degree is i. We will refer them to as marked vertices. Notice that for all 0 k T , the sum Fi(k) +Mi(k) corresponds to the total number of vertices of U with remaining degree i at the end of step k. We also define • bF (k) := P i 0 iFi(k) is the number of available half-edges attached to free vertices at the end of step k, 5Without loss of generality, we assume that the additional extra vertex is always on the V side. • cM(k) := P i 0 iMi(k) is the number of available half-edges attached to marked vertices at the end of step k. We are going to study the evolution of these quantities along with the one of GREEDY. A major ingredient of the proof is to show that Fi(k) and Mi(k) closely follow the solutions of some ODE. This is the so-called differential equation method [Wormald, 1995], stated in Appendix C. For instance, it can easily be seen that bF (k) + cM(k) closely follows the function t 7! 
µU tµV on (0, µU/µV) in the following sense. Lemma 1. For every " > 0, and for all 0 k T , bF (k) + cM(k) N µU k N µV ⌘ ". with probability at least 1 exp N✏2 2 2U + exp T ✏2 2 2V . We now turn to each individual quantity Fi (resp. Mi). We can prove a similar result, yet the limit function is not explicit (unlike for the matching size as in Theorem 1 statement). The following Lemma 2 states that the discrete sequences of (free and marked) half-edges are closely related to the solutions of some system of differential equations. Before stating it, we first introduce, for any sequence of non-negative numbers (x`)` 0 and (y`)` 0 such that 0 < P ` `(x` + y`) <1, every i 0, the following mappings i(x0, x1, . . . , y0, y1, . . .) := iµVxi + (i+ 1)µVxi+1 h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) (5) and i(x0, x1, . . . , y0, y1, . . .) := iµVyi + (i+ 1)µVyi+1 + h ⇣ P ` 0 `y`P ` 0 `(x`+y`) ⌘ (i+ 1)xi+1 P ` 0 `(x` + y`) , where h is the following function, well-defined on [0, 1], h(s) = 1 V(s) 1 s . Lemma 2. With probability 1 ⇣N exp( ⇠N c/2), there are at most N c quantities Fi and Mi, and for all 0 k T and all i 0 Fi(k) N fi ✓ k N ◆ N 2c and Mi(k) N mi ✓ k N ◆ N 2c, where ⇣, depend only on the (first two) moments of ⇡V and ⇡U and c = 1/20. The continuous mappings fi and mi are solutions of the system of differential equations on [0, µU/µV) dfi dt = i(f0, f1, . . . ,m0,m1, . . .), dmi dt = i(f0, f1, . . . ,m0,m1, . . .), fi(0) = ⇡U (i), mi(0) = 0. (6) This system is well defined as stated by the following Lemma 3. Lemma 3. The system (6) has a unique solution which is well-defined on [0, µU/µV). More precisely, denoting by f and m the generating series of the sequences (fi)i 0 and (mi)i 0, f(t, s) = X i 0 fi(t)s i and m(t, s) = X i 0 mi(t)s i, it holds that: f ✓ µU µV 1 e µV t , s ◆ = U (s 1)e µV t + 1 F (t) , (7) and m ✓ µU µV 1 e µV t , s ◆ = ˆ t 0 F 0(u) 0U (s 1)e µVu + 1 F (u) du. where F is a solution of the following ODE 1 µU 0U (1 F (t)) 1 V ⇣ 1 1µU 0 U (1 F (t)) ⌘F 0(t) = e µV t . 3.3 Aggregating solutions to compute GREEDY performances To get Theorem 1, notice that the number of vertices matched by GREEDY is N minus the number of free vertices remaining at the end, which is approximately equal to Nf(µUµV , 1) by definition of f and because of Lemma 2. This corresponds to t = +1 in Equation (7), thus the performance of GREEDY is, with arbitrarily high probability, arbitrarily close to N(1 U (1 F (+1))) The statement of Theorem 1 just follows from a simple final change of variable. Conclusion We studied theoretical performances of GREEDY algorithm on matching problems with different underlying structures. Those precise results are quite interesting and raise many questions, especially since GREEDY actually outperforms RANKING in many different situations (in theory for 2-regular graphs, but empirical evidence indicates that this happens more generically). Our approach has also successfully been used to unveil some questions on the comparison between different possible models. But more general questions are still open; for instance, assuming that the expected degree is fixed, which situation is the more favorable to GREEDY and online algorithm: small or high variance, or more generally this distribution ⇡U or an alternative one ⇡0U ? The obvious technique would be to compare the solution of the different associated ODE’s. Similarly, the questions of stability/robustness of the solution to variation in the distribution ⇡U and ⇡V are quite challenging and left for future work. 
We believe online matching will become an important problem for the machine learning community in the future. Each year, the complexity of the underlying graphs increases, and we are considering adding features to the model in future work (such as random variables on the edges, modeling the interest of a consumer for a given product), or connections modeled via some kernel between vertex features (say, if users and products/campaigns are embedded in the same space). In this context, machine learning tools will certainly be needed to tackle the problem.

Acknowledgments and Disclosure of Funding

V. Perchet acknowledges support from the ANR under grant number #ANR-19-CE23-0026 as well as the support grant as part of the Investissement d'avenir project, reference LabEx Ecodec/ANR-11-LABX-0047. Nathan Noiry also acknowledges support from the Telecom Paris DSAIDIS chair and from the ANR ProGraM (ANR-19-CE40-0025). Flore Sentenac is supported by IP PARIS' PhD Funding.
1. What is the focus of the paper regarding online matching on random bipartite graphs? 2. What are the strengths of the proposed approach, particularly in terms of the differential equation method? 3. What are the weaknesses of the paper, especially in the experimental section? 4. Do you have any concerns about the competitive ratios of the GREEDY algorithm in different graph models? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

This paper analyzes the performance of the GREEDY algorithm for online matching on random bipartite graphs generated by the configuration model with given degree distributions. The analysis is based on the differential equation method, a stochastic approximation using an ODE to represent the time evolution of the matching size under the GREEDY algorithm with overwhelming probability. The paper reports the following main results:
- A proof that the matching size of the GREEDY algorithm converges to the solution of an ODE.
- Competitive ratios of the GREEDY algorithm in different graph models (d-regular graphs, Erdős-Rényi graphs).
- A proof that GREEDY beats RANKING on the 2-regular graph, with experimental evidence suggesting that the same holds for higher degrees.

Review

Very readable, great writing, though typos are noticeable. The main idea is to consider the random process of graph generation in the configuration model together with the time evolution of the GREEDY matching, which is interesting. The main result generalizes or complements some known results on special graph families. The paper is easy to follow. Some nitpicks:
- Typo: line 19, Page 1, second paragraph of Introduction, Internal advertising should be Internet advertising.
- Typo: Algorithm ??, Page 4, line 129.
Great introduction on online matching and its variants (weighted, configuration model).
NIPS
Title A New Perspective on Pool-Based Active Classification and False-Discovery Control Abstract In many scientific settings there is a need for adaptive experimental design to guide the process of identifying regions of the search space that contain as many true positives as possible subject to a low rate of false discoveries (i.e. false alarms). Such regions of the search space could differ drastically from a predicted set that minimizes 0/1 error and accurate identification could require very different sampling strategies. Like active learning for binary classification, this experimental design cannot be optimally chosen a priori, but rather the data must be taken sequentially and adaptively. However, unlike classification with 0/1 error, collecting data adaptively to find a set with high true positive rate and low false discovery rate (FDR) is not as well understood. In this paper we provide the first provably sample efficient adaptive algorithm for this problem. Along the way we highlight connections between classification, combinatorial bandits, and FDR control making contributions to each. 1 Introduction As machine learning has become ubiquitous in the biological, chemical, and material sciences, it has become irresistible to use these techniques not only for making inferences about previously collected data, but also for guiding the data collection process, closing the loop on inference and data collection [10, 38, 41, 39, 33, 31]. However, though collecting data randomly or non-adaptively can be inefficient, ill-informed ways of collecting data adaptively can be catastrophic: a procedure could collect some data, adopt an incorrect belief, collect more data based on this belief, and leave the practitioner with insufficient data in the right places to infer anything with confidence. In a recent high-throughput protein synthesis experiment [33], thousands of short amino acid sequences (length less than 60) were evaluated with the goal of identifying and characterizing a subset of the pool of all possible sequences ( ≈ 1080) containing many sequences that will fold into stable proteins. That is, given an evaluation budget that is just a minuscule proportion of the total number of sequences, the researchers sought to make predictions about individual sequences that would never be evaluated. An initial first round of sequences uniformly sampled from a predefined subset were synthesized to observe whether each sequence was in the set of sequences that will fold,H1, or inH0 = Hc1. Treating this as a classification problem, a linear logistic regression classifier was trained, using these labels and physics based features. Then a set of sequences to test in the next round were chosen to maximize the probability of folding according to this empirical model - a procedure repeated twice more. This strategy suffers two flaws. First, selecting a set to maximize the likelihood of hits given past rounds’ data is effectively using logistic regression to perform optimization similar to follow-the-leader strategies [14]. While more of the sequences evaluated may fold, these observations may provide little information about whether sequences that were not evaluated will fold or not. Second, while it is natural to employ logistic regression or the SVM 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. 
to discriminate between binary outcomes (e.g., fold/not-fold), in many scientific applications the property of interest is incredibly rare and an optimal classifier will just predict a single class e.g. not fold. This is not only an undesirable inference for prediction, but a useless signal for collecting data to identify those regions with higher, but still unlikely, probabilities of folding. Consider the data of [33] reproduced in Figure 1, where the proportion of sequences that fold along with their distributions for a particularly informative feature (Buried NPSA) are shown in each round for two different protein topologies (notated βαββ and ααα). In the last column of Figure 1, even though most of the sequences evaluated are likely to fold, we are sampling in a small part of the overall search space. This limits our overall ability to identify under-explored regions that could potentially contain many sequences that fold, even though the logistic model does not achieve its maximum there. On the other hand, in the top plot of Figure 1, sequences with topology βαββ (shown in blue) so rarely folded that a near-optimal classifier would predict “not fold” for every sequence. Instead of using a procedure that seeks to maximize the probability of folding or classifying sequences as fold or not-fold, a more natural objective is to predict a set of sequences π in such a way as to maximize the true positive rate (TPR) |H1 ∩π|/|H1| while minimizing the false discovery rate (FDR) i.e. |H0 ∩ π|/|π|. That is, π is chosen to contain a large number of sequences that fold while the proportion of false-alarms among those predicted is relatively small. For example, if a set π for βαββ was found that maximized TPR subject to FDR being less than 9/10 then π would be non-empty with the guarantee that at least one in every 10 suggestions was a true-positive; not ideal, but making the best of a bad situation. In some settings, such as for topology ααα (shown in orange), training a classifier to minimize 0/1 loss may be reasonable. Of course, before seeing any data we would not know whether classification is a good objective so it is far more conservative to optimize for maximizing the number of discoveries. Contributions. We propose the first provably sample-efficient adaptive sampling algorithm for maximizing TPR subject to an FDR constraint. This problem has deep connections to active binary classification (e.g., active learning) and pure-exploration for combinatorial bandits that are necessary steps towards motivating our algorithm. We make the following contributions: 1. We improve upon state of the art sample complexity for pool-based active classification in the agnostic setting providing novel sample complexity bounds that do not depend on the disagreementcoefficient for sampling with or without replacement. Our bounds are more granular than previous results as they describe the contribution of a single example to the overall sample complexity. 2. We highlight an important connection between active classification and combinatorial bandits. Our results follow directly from our improvements to the state of the art in combinatorial bandits, extending methods to be near-optimal for classes that go beyond matroids where one need not sample every arm at least once. 3. Our main contribution is the development and analysis of an adaptive sampling algorithm that minimizes the number of samples to identify the set that maximizes the true positive rate subject to a false discovery constraint. 
To the best of our knowledge, this is the first work to demonstrate a sample complexity for this problem that is provably better than non-adaptive sampling. 1.1 Pool Based Classification and FDR Control Here we describe what is known as the pool-based setting for active learning with stochastic labels. Throughout the following we assume access to a finite set of items [n] = {1, · · · , n} with an associated label space {0, 1}. The items can be fixed vectors {xi}ni=1 ∈ Rd but we do not restrict to this case. Associated to each i ∈ [n] there is a Bernoulli distribution Ber(ηi) with ηi ∈ [0, 1]. We imagine a setting where in each round a player chooses It ∈ [n] and observes an independent random variable YIt,t. For any i, Yi,t ∼ Ber(ηi) are i.i.d. Borrowing from the multi-armed bandit literature, we may also refer to the items as arms, and pulling an arm is receiving a sample from its corresponding label distribution. We will refer to this level of generality as the stochastic noise setting. The case when ηi ∈ {0, 1}, i.e. each point i ∈ [n] has a deterministic label Yi,j = ηi for all j ≥ 1, will be referred to as the persistent noise setting. In this setting we can define H1 = {i : ηi = 1},H0 = [n] \ H1. This is a natural setting if the experimental noise is negligible so that performing the same measurement multiple times gives the same result. A classifier is a decision rule f : [n]→ {0, 1} that assigns each item i ∈ [n] a fixed label. We can identify any such decision rule with the set of items it maps to 1, i.e. the set π = {i : i ∈ [n], f(i) = 1}. Instead of considering all possible sets π ⊂ [n], we will restrict ourselves to a smaller class Π ⊂ 2[n]. With this interpretation, one can imagine Π being a combinatorial class, such as the collection of all subsets of [n] of size k, or if we have features, Π could be the sets induced by the set of all linear separators over {xi}. The classification error, or risk of a classifier is given by the expected number of incorrect labels, i.e. R(π) = Pi∼Unif([n]),Yi∼Ber(ηi) (π(i) 6= Yi) = 1 n ( ∑ i 6∈π ηi + ∑ i∈π (1− ηi)) for any π ∈ Π. In the case of persistent noise the above reduces toR(π) = |π∩H0|+|π c∩H1| n = |H1∆π| n where A∆B = (A ∪B)− (A ∩B) for any sets A,B. Problem 1:(Classification) Given a hypothesis class Π ⊆ 2[n] identify π∗ := argmin π∈Π R(π) by requesting as few labels as possible. As described in the introduction, in many situations we are not interested in finding the lowest risk classifier, but instead returning π ∈ Π that contains many discoveries π ∩H1 without too many false alarms π ∩H0. Define ηπ := ∑ i∈π ηx. The false discovery rate (FDR) and true positive rate (TPR) of a set π in the stochastic noise setting are given by FDR(π) := 1− ηπ |π| and TPR(π) := ηπ η[n] In the case of persistent noise, FDR(π) = |H0∩π||π| = 1 − |H1∩π| |π| and TPR(π) = |H1∩π| |H1| . A convenient quantity that we can use to reparametrize these quantities is the true positives: TP (π) :=∑ i∈π ηi. Throughout the following we let Πα = {π ∈ Π : FDR(π) ≤ α}. Problem 2:(Combinatorial FDR Control) Given an α ∈ (0, 1) and hypothesis class Π ⊆ 2[n] identify π∗α = argmax π∈Π,FDR(π)≤α TPR(π) by requesting as few labels as possible. In this work we are agnostic about how η relates to Π, ala [2, 20]. For instance we do not assume the Bayes classifier, argminB∈{0,1}nR(B) is contained in Π. 2 Related Work Active Classification. Active learning for binary classification is a mature field (see surveys [36, 25] and references therein). 
The major theoretical results of the field can coarsely be partitioned into the streaming setting [2, 6, 20, 26] and the pool-based setting [19, 24, 32], noting that algorithms for the former can be used for the latter, [2], an inspiration for our algorithm, is such an example. These results rely on different complexity measures known as the splitting index, the teaching dimension, and (arguably the most popular) the disagreement coefficient. Computational Considerations. While there have been remarkable efforts to make some of these methods more computationally efficient [6, 26], we believe even given infinite computation, many of these previous works are fundamentally inefficient from a sample complexity perspective. This stems from the fact that when applied to common combinatorial classes (for example the collection of all subsets of size k), these algorithms have sample complexities that are off by at least log(n) factors from the best algorithms for these classes. Consequently, in our work we focus on sample complexity alone, and leave matters of computational efficiency for future work. Other Measures. Given a static dataset, the problem of finding a set or classifier that maximizes TPR subject to FDR-control in the information retrieval community is also known as finding a binary classifier that maximizes recall for a given precision level. There is extensive work on the non-adaptive sample complexity of computing measures related to precision and recall such as AUC, and F-scores [35, 9, 1]. However, there have been just a few works that consider adaptively collecting data with the goal of maximizing recall with precision constraints [34, 5], with the latter work being the most related. We will discuss it further after the statement of our main result. In [34], the problem of adaptively estimating the whole ROC curve for a threshold class is considered under a monotonicity assumption on the true positives; our algorithm is agnostic to this assumption. Combinatorial Bandits: The pure-exploration combinatorial bandit game has been studied for the case of all subsets of [n] of size k known as the Top-K problem [22, 29, 30, 28, 37, 17], the bases of a rank-k matroid (for which Top-K is a particular instance) [18, 23, 15], and in the general case [11, 16]. The combinatorial bandit component of our work (see Section 3.2) is closest to [11]. The algorithm of [11] uses a disagreement-based algorithm in the spirit of Successive Elimination for bandits [22], or the A2 for binary classification [2]. Exploring precisely what log factors are necessary has been an active area. [16] demonstrates a family of instances in which they show in the worst-case, the sample complexity must scale with log(|Π|). However, there are many classes like best-arm identification and matroids where sample complexity does not scale with log(|Π|) (see references above). Our own work provides some insight into what log factors are necessary by presenting our results in terms of VC dimension. In addition, we discuss situtations when a log(n) could potentially be avoided by appealing to Sauer’s lemma in the supplementary material. Multiple Hypothesis Testing. Finally, though this work shares language with the adaptive multiplehypothesis testing literature [12, 27, 42, 40], the goals are different. In that setting, there is a set of n hypothesis tests, where the null is that the mean of each distribution is zero and the alternative is that it is nonzero. 
[27] designs a procedure that adaptively allocates samples and uses the BenjaminiHochberg procedure [4] on p-values to return an FDR-controlled set. We are not generally interested in finding which individual arms have means that are above a fixed threshold, but instead, given a hypothesis class we want to return an FDR controlled set in the hypothesis class with high TPR. This is the situation in many structured problems in scientific discovery where the set of arms corresponds to an extremely large set of experiments and we have feature vector associated with each arm. We can’t run each one but we may have some hope of identifying a region of the search space which contains many discoveries. In summary, unlike the setting of [27], Π encodes structure among the sets, we do not insist each item is sampled, and we are allowing for persistent labels - overall we are solving a different and novel problem. 3 Pool Based Active Classification We first establish a pool based active classification algorithm that motivates our development of an adaptive algorithm for FDR-control. For each i define µi := 2ηi − 1 ∈ [−1, 1] so ηi = 1+µi2 . By a simple manipulation of the definition of R(π) above we have R(π) = 1 n n∑ i=1 ηi + 1 n ∑ i∈π (2ηi − 1) = 1 n n∑ i=1 ηi − 1 n ∑ i∈π µi so that argmin π∈Π R(π) = argmax π∈Π ∑ i∈π µi. Define µπ := ∑ i∈π µi. If for some i ∈ [n] we map the jth draw of its label Yi,j 7→ 2Yi,j − 1, then E[2Yi,j − 1] = µi and returning an optimal classifier in the set is equivalent to returning π ∈ Π with the largest µπ . Algorithm 1 exploits this. The algorithm maintains a collection of active setsAk ⊆ Π and an active set of items Tk ⊆ [n] which is the symmetric difference of all sets in Ak. To see why we only sample in Tk, if i ∈ ∩π∈Akπ then π and π′ agree on the label of item i, and any contribution of arm i is canceled in each difference µ̂π − µ̂π′ = µ̂π\π′ − µ̂π′\π for all π, π′ ∈ Ak so we should not pay to sample it. In each round sets π with lower empirical means that fall outside of the confidence interval of sets with higher empirical means are removed. There may be some concern that samples from previous rounds are reused. The estimator µ̂π′,k − µ̂π,k = nt ∑t s=1RIt,s(1(Is ∈ π′ \ π)− 1(Is ∈ π \ π′)) depends on all t samples up to the t-th round, each of which is uniformly and independently drawn at each step. Thus each summand is an unbiased estimate of µπ′ − µπ. However, for π, π′ active in round k, as explained above, a summand is only non-zero if Is ∈ π∆π′ ⊂ Tk hence we only need to observe RIt,s if It ∈ Tk so the estimate of µ̂π′,k − µ̂π,k is unbiased. In practice, since the number of samples that land in Tk follow a binomial distribution, instead of using rejection sampling we could instead have drawn a single sample from a binomial distribution and sampled that many uniformly at random from Tk. Input: δ, Π ⊂ 2[n], Confidence bound C(π′, π, t, δ). Let A1 = Π, T1 = (∪π∈A1π)− (∩π∈A1π), k = 1, Ak will be the active sets in round k for t = 1, 2, · · · if t == 2k: Set δk = .5δ/k2. For each π, π′ let µ̂π′,k − µ̂π,k = nt ( ∑t s=1 RIs,s1{Is ∈ π ′ \ π} − ∑t s=1 RIs,s1{Is ∈ π \ π ′}) Set Ak+1 = Ak − { π ∈ Ak : ∃π′ ∈ Akwith µ̂π′,k − µ̂π,k > C(π′, π, t, δk) } . Set Tk+1 = ( ∪π∈Ak+1π ) − ( ∩π∈Ak+1π ) . k ← k + 1 endif Stochastic Noise: If Tk = ∅, Break. Otherwise, draw It uniformly at random from [n] and if It ∈ Tk receive an associated reward RIt,t = 2YIt,t − 1, YIt,t iid∼ Ber(ηIt). Persistent Noise: If Tk = ∅ or t > n, Break. 
Otherwise, draw It uniformly at random from [n] \ {Is : 1 ≤ s < t} and if It ∈ Tk receive associated reward RIt,t = 2YIt,t − 1, YIt,t = ηIt . Output: π′ ∈ Ak such that µ̂π′,k − µ̂π,k ≥ 0 for all π ∈ Ak \ π′ Algorithm 1: Action Elimination for Active Classification For any A ⊆ 2[n] define V (A) as the VC-dimension of a collection of sets A. Given a family of sets, Π ⊆ 2[n], define B1(k) := {π ∈ Π : |π| = k}, B2(k, π′) := {π ∈ Π : |π∆π′| = k}. Also define the following complexity measures: Vπ := V (B1(|π|)) ∧ |π| and Vπ,π′ := max{V (B2(|π∆π′|, π), V (B2(|π∆π′|, π′))} ∧ |π∆π′| In general Vπ, Vπ,π′ ≤ V (Π). A contribution of our work is the development of confidence intervals that do not depend on a union bound over the class but instead on local VC dimensions. These are described carefully in Lemma 1 in the supplementary materials. Theorem 1 For each i ∈ [n] let µi ∈ [−1, 1] be fixed but unknown and assume {Ri,j}∞j=1 is an i.i.d sequence of random variables such that E[Ri,j ] = µi and Ri,j ∈ [−1, 1]. Define ∆̃π = |µπ − µπ∗ |/|π∆π∗|, and τπ = Vπ,π∗ |π∗∆π| 1 ∆̃2π log ( n log(∆̃−2π )/δ ) . UsingC(π, π′, t, δ) := √ 8|π∆π′|nVπ,π′ log( n δ ) t + 4nVπ,π′ log( n δ ) 3t for a fixed constant c, with probability greater than 1− δ, in the stochastic noise setting Algorithm 1 returns π∗ after a number of samples no more than c ∑n i=1 maxπ∈Π:i∈π∆π∗ τπ and in the persistent noise setting the number of samples needed is no more than c ∑n i=1 min{1,maxπ∈Π:i∈π∆π∗ τπ} Heuristically, the expression 1/|π∆π∗|∆̃2π roughly captures the number of times we would have to sample each i ∈ π∆π∗ to ensure that we can show µπ∗ > µπ. Thus in the more general case, we may expect that we can stop pulling a specific i once each set π such that i ∈ π∆π∗ is removed - accounting for the expression maxπ∈Π,i∈π∆π∗ τπ. The VC-dimension and the logarithmic term in τπ is discussed further below and primarily comes from a careful union bound over the class Π. One always has 1/|π∗∆π| ≤ Vπ,π∗/|π∗∆π| ≤ 1 and both bounds are achievable by different classes Π. In addition, in terms of risk ∆̃π = |µπ−µπ∗ |/|π∆π∗| = n|R(π)−R(π∗)|/|π∆π∗|. Since sampling is done without replacement for persistent noise, there are improved confidence intervals that one can use in that setting described in Lemma 1 in the supplementary materials. Finally, if we had sampled non-adaptively, i.e. without rejection sampling, we would have had a sample complexity of O(nmaxi∈[n] maxπ:Π:i∈π∆π∗ τπ). 3.1 Comparison with previous Active Classification results. One Dimensional Thresholds: In the bound of Theorem 1, a natural question to ask is whether the log(n) dependence can be improved. In the case of nested classes, such as thresholds on a line, we can replace the log(n) with a log log(n) using empirical process theory. This leads to confidence intervals dependent on log log(n) that can be used in place of C(π′, π, t, δ) in Algorithm 1 (see sections C for the confidence intervals and 3.2 for a longer discussion). Under specific noise models we can give a more interpretable sample complexity. Let h ∈ (0, 1], α ≥ 0, z ∈ [0, 1] for some i ∈ [n − 1] and assume that ηi = 12 + sign(z−i/n) 2 h|z − i/n| α so that µi = h|z − i/n|αsign(z − i/n) (this would be a reasonable noise model for topology ααα in the introduction). Let Π = {[k] : k ≤ n}. In this case, inspecting the dominating term of Theorem 1 for i ∈ π∗ we have arg maxπ∈Π:i∈π∆π∗ Vπ,π∗ |π∆π∗| 1 ∆̃2π = [i] and takes a value of ( 1+α h )2 n−1(z − i/n)−2α−1. 
Upper bounding the other terms and summing, the sample complexities can be calculated to be O(log(n) log(log(n)/δ)/h2) if α = 0, and O(n2α log(log(n)/δ)/h2) if α > 0. These rates match the minimax lower bound rates given in [13] up to log log factors. Unlike the algorithms given there, our algorithm works in the agnostic setting, i.e. it is making no assumptions about whether the Bayes classifier is in the class. In the case of non-adaptive sampling, the sum is replaced with the max times n yielding n2α+1 log(log(n)/δ)/h2 which is substantially worse than adaptive sampling. Comparison to previous algorithms: One of the foundational works on active learning is the DHM algorithm of [20] and the A2 algorithm that preceded it [2]. Similar in spirit to our algorithm, DHM requests a label only when it is uncertain how π∗ would label the current point. In general the analysis of the DHM algorithm can not characterize the contribution of each arm to the overall sample complexity leading to sub-optimal sample complexity for combinatorial classes. For example in the the case when Π = {[i]}ni=1, with i∗ = arg maxi∈[n] µi, ignoring logarithmic factors, one can show for this problem the bound of Theorem 1 of [20] scales like n2 maxi 6=i∗(µi∗ − µ−2i ) which is substantially worse than our bound for this problem which scales like ∑ i 6=i∗ ∆ −2 i . Similar arguments can be made for other combinatorial classes such as all subsets of size k. While we are not particularly interested in applying algorithms like DHM to this specific problem, we note that the style of its analysis exposes such a gross inconsistency with past analyses of the best known algorithms that the approach leaves much to be desired. For more details, please see A.2 in the supplementary materials. 3.2 Connections to Combinatorial Bandits A closely related problem to classification is the pure-exploration combinatorial bandit problem. As above we have access to a set of arms [n], and associated to each arm is an unknown distribution νi with support in [−1, 1] - which is arbitrary not just a Bernoulli label distribution. We let {Ri,j}∞j=1 be a sequence of random variables where Ri,j ∼ νi is the jth (i.i.d.) draw from νi satisfying E[Ri,j ] = µi ∈ [−1, 1]. In the persistent noise setting we assume that νi is a point mass at µi ∈ [−1, 1]. Given a collection of sets Π ⊆ 2[n], for each π ∈ Π we define µπ := ∑ i∈π µi the sum of means in π. The pure-exploration for combinatorial bandit problem asks, given a hypothesis class Π ⊆ 2[n] identify π∗ = argmax π∈Π µπ by requesting as few labels as possible. The combinatorial bandit extends many problems considered in the multi-armed bandit literature. For example setting Π = {{i} : i ∈ [n]} is equivalent to the best-arm identification problem. The discussion at the start of Section 3 shows that the classification problem can be mapped to combinatorial bandits - indeed minimizing the 0/1 loss is equivalent to maximizing µπ. In fact, Algorithm 1 gives state of the art results for the pure exploration combinatorial bandit problem and furthermore Theorem 1 holds verbatim. Algorithm 1 is similar to previous action elimination algorithms for combinatorial bandits in the literature, e.g. Algorithm 4 in [11]. However, unlike previous algorithms, we do not insist on sampling each item once, an unrealistic requirement for classification settings - indeed, not having this constraint allows us to reach minimax rates for classification in one dimensions as discussed above. 
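As a concrete reading of Algorithm 1 (which, as just noted, applies verbatim to this bandit formulation), the sketch below gives a minimal Python implementation of the elimination loop in the stochastic-noise setting. It is our illustration rather than the authors' code: candidate sets are kept as an explicit list of frozensets, the argument conf is a plug-in standing in for the radius C(pi', pi, t, delta_k) of Theorem 1, and no attempt is made at computational efficiency.

import random

def active_classification(n, classes, sample_label, conf, delta, max_steps=200000):
    # n            -- number of items in the pool [n] (indexed 0..n-1 here)
    # classes      -- list of candidate sets pi, each a frozenset of item indices
    # sample_label -- sample_label(i) returns a fresh label Y_{i,t} in {0, 1}
    # conf         -- conf(p, q, t, delta) returns the confidence radius C(p, q, t, delta)
    active = list(classes)
    reward_sum = [0.0] * n            # running sums of observed rewards R = 2Y - 1

    def mu_diff(p, q, t):             # empirical estimate of mu_p - mu_q at time t
        return (n / t) * (sum(reward_sum[i] for i in p - q)
                          - sum(reward_sum[i] for i in q - p))

    k = 1
    for t in range(1, max_steps + 1):
        if len(active) <= 1:
            break
        sym_diff = set.union(*map(set, active)) - set.intersection(*map(set, active))
        if not sym_diff:              # all surviving sets agree on every item
            break
        i = random.randrange(n)       # I_t drawn uniformly from [n]
        if i in sym_diff:             # only labels in the symmetric difference are informative
            reward_sum[i] += 2 * sample_label(i) - 1
        if t == 2 ** k:               # epoch boundary: eliminate dominated sets
            d_k = 0.5 * delta / k ** 2
            active = [p for p in active
                      if not any(mu_diff(q, p, t) > conf(q, p, t, d_k) for q in active)]
            k += 1
    if not active:
        return None
    # return a survivor whose empirical mean is no worse than that of any other survivor
    return max(active, key=lambda p: min((mu_diff(p, q, t) for q in active if q != p),
                                         default=0.0))

As a usage example, the one-dimensional threshold setting of Section 3.1 can be instantiated as follows; the constants, and the choice of local VC dimension 1 in the radius, are our own illustrative assumptions.

import math
import numpy as np

# Pi = {[k] : k <= n} with the noise model eta_i = 1/2 + sign(z - i/n) * h * |z - i/n|**alpha / 2.
n, z, h, alpha = 50, 0.6, 0.8, 0.0
mu = np.array([h * abs(z - i / n) ** alpha * np.sign(z - i / n) for i in range(1, n + 1)])
eta = (1.0 + mu) / 2.0
classes = [frozenset(range(k)) for k in range(1, n + 1)]   # prefixes [k], 0-indexed

def sample_label(i):                     # Y_{i,t} ~ Ber(eta_i)
    return int(np.random.rand() < eta[i])

def conf(p, q, t, delta):                # Theorem 1 radius with V_{p,q} taken to be 1
    d = len(p ^ q)
    return (math.sqrt(8 * d * n * math.log(n / delta) / t)
            + 4 * n * math.log(n / delta) / (3 * t))

best = active_classification(n, classes, sample_label, conf, delta=0.05)
print(len(best))                         # typically settles near the Bayes threshold z * n

With alpha = 0 the gaps are of constant size h and the surviving prefix settles quickly; as alpha grows the items near z become harder to label and the number of samples grows roughly like the n^(2*alpha) rate discussed above. For structured classes such as thresholds, matroids, or all subsets of size k, the explicit enumeration of Pi would of course be replaced by class-specific bookkeeping; the sketch is only meant to fix the sampling and elimination logic.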
In addition, this resolves a concern brought up in [11] for elimination being used for PAC-learning. We prove Theorem 1 in this more general setting in the supplementary materials, see A.3. The connection between FDR control and combinatorial bandits is more direct: we are seeking to find π ∈ Π with maximum ηπ subject to FDR-constraints. This already highlights a key difference Input: Confidence bounds C1(π, t, δ), C2(π, π′, t, δ) Ak ⊂ Π will be the set of active sets in round k. Ck ⊂ Π is the set of FDR-controlled policies in round k. A1 = Π, C1 = ∅, S1 = ∪π∈Ππ, T1 = ⋃ π∈Π π − ⋂ π∈Π π, k = 1. for t = 1, 2, · · · if t = 2k: Let δk = .25δ/k2 For each set π ∈ Ak, and each pair π′, π ∈ Ak update the estimates: F̂DR(π) := 1− n|π|t ∑t s=1 YIs,s1{Is ∈ π} T̂P (π′)− T̂P (π) := n t (∑t s=1 Y ′ Js,s1{Js ∈ π ′\π} − ∑t s=1 Y ′ Js,s1{Js ∈ π\π ′} ) Set Ck+1 = Ck ∪ {π ∈ Ak \ Ck : F̂DR(π) + C1(π, t, δk)/|π| ≤ α} Set Ak+1 = Ak Remove any π from Ak+1 and Ck+1 such that one of the conditions is true: 1. F̂DR(π)− C1(π, t, δk)/|π| > α 2. ∃π′ ∈ Ck+1 with T̂P (π′)− T̂P (π) > C2(π, π′, t, δk) and add π to a set R Remove any π from Ak+1 and Ck+1 such that: 3. ∃π′ ∈ Ck+1 ∪R, such that π ⊂ π′. Set Sk+1 := ⋃ π∈Ak+1\Ck+1 π, and Tk+1 = ⋃ π∈Ak+1 π − ⋂ π∈Ak+1 π. k ← k + 1 endif Stochastic Noise: if |Ak| = 1, Break. Otherwise: Sample It ∼ Unif([n]). If It ∈ Sk, then receive a label YIt,t ∼ Ber(ηIt). Sample Jt ∼ Unif([n]). If Jt ∈ Tk, then receive a label Y ′Jt,t ∼ Ber(ηJt). Persistent Noise: If |Ak| = 1 or t > n, Break. Otherwise: Sample It ∼ [n]\{Is : 1 ≤ s < t}. If It ∈ Sk, then receive a label YIt,t = ηIt . Sample Jt ∼ [n]\{Js : 1 ≤ s < t}. If Jt ∈ Tk, then receive a label Y ′Jt,t = ηJt . Return maxt∈Ck+1 T̂P (π) Algorithm 2: Active FDR control in persistent and bounded noise settings. between classification and FDR-control. In one we choose to sample to maximize ηπ subject to FDR constraints where each ηi ∈ [0, 1], whereas in classification we are trying to maximize µπ where each µi ∈ [−1, 1]. A major consequence of this difference is that ηπ ≤ ηπ′ whenever π ⊆ π′, but such a condition does not hold for µπ, µπ′ . Motivating the sample complexity: As mentioned above, the general combinatorial bandit problem is considered in [11]. There they present an algorithm with sample complexity, C n∑ i=1 max π:i∈π∆π∗ 1 |π∆π∗| 1 ∆̃2π log ( max(|B(|π∆π∗|, π)|, |B(|π∆π∗|, π∗)|)n δ ) This complexity parameter is difficult to interpret directly so we compare it to one more familiar in statistical learning - the VC dimension. To see how this sample complexity relates to ours in Theorem 1, note that log2 |B(k, π∗)| ≤ log2 ( n k ) . k log2(n). Thus by the Sauer-Shelah lemma, V (B(r, π∗)) . log2(|B(r, π∗)|) . min{V (B(r, π∗)), r} log2(n) where . hides a constant. The proof of the confidence intervals in the supplementary effectively combines these two facts along with a union bound over all sets in B(r, π∗). 4 Combinatorial FDR Control Algorithm 2 provides an active sampling method for determining π ∈ Π with FDR(π) ≤ α and maximal TPR, which we denote as π∗α. Since TPR(π) = TP (π)/η[n], we can ignore the denominator and so maximizing the TPR is the same as maximizing TP . The algorithm proceeds in epochs. At all times a collection Ak ⊆ Π of active sets is maintained along with a collection of FDRcontrolled sets Ck ⊆ Ak. In each time step, random indexes It and Jt are sampled from the union Sk = ∪π∈Ak\Ckπ and the symmetric difference Tk = ∪π∈Akπ − ∩π∈Akπ respectively. 
Associated random labels YIt,t, YJt,t ∈ {0, 1} are then obtained from the underlying label distributions Ber(ηIt) and Ber(ηJt). At the start of each epoch, any set with a FDR that is statistically known to be under α is added to Ck, and any sets whose FDR are greater than α are removed from Ak in condition 1. Similar to the active classification algorithm of Figure 1, a set π ∈ Ak is removed in condition 2 if TP (π) is shown to be statistically less than TP (π′) for some π′ ∈ Ck that, crucially, is FDR controlled. In general there may be many sets π ∈ Π such that TP (π) > TP (π∗α) that are not FDR-controlled. Finally in condition 3, we exploit the positivity of the ηi’s: if π ⊂ π′ then deterministically TP (π) ≤ TP (π′), so if π′ is FDR controlled it can be used to eliminate π. The choice of Tk is motivated by active classification: we only need to sample in the symmetric difference. To determine which sets are FDR-controlled it is important that we sample in the entirety of the union of all π ∈ Ak \ Ck, not just the symmetric difference of the Ak, which motivates the choice of Sk. In practical experiments persistent noise is not uncommon and avoids the potential for unbounded sample complexities that potentially occur when FDR(π) ≈ α. Figure 2 demonstrates a model run of the algorithm in the case of five sets Π = {π1, . . . , π5}. Recall that Πα is the subset of Π that is FDR-controlled so that π∗α = arg maxπ∈Πα TP (π). The following gives a sample complexity result for the number of rounds before the algorithm terminates. Theorem 2 Assume that for each i ≤ n there is an associated ηi ∈ [0, 1] and {Yi,j}∞j=1 is an i.i.d. sequence of random variables such that Yi,j ∼ Ber(ηi). For any π ∈ Π define ∆π,α = |FDR(π)−α|, and ∆̃π = |TP (π∗α)− TP (π)|/|π∆π∗| = |TP (π∗α \ π)− TP (π \ π∗α)|/|π∆π∗|, and sFDRπ = Vπ |π| 1 ∆2π,α log ( n log(∆−2π,α)/δ ) , sTPπ = Vπ,π∗α |π∆π∗α| 1 ∆̃2π log ( n log(∆̃−2π )/δ ) In addition define TFDRπ = min{sFDRπ , max{sTPπ , sFDRπ∗α }, minπ′∈Πα π⊂π′ sFDRπ′ } and TTPπ = min{max{sTPπ , sFDRπ∗α }, minπ′∈Πα π⊂π′ sFDRπ′ }. Using C1(π, t, δ) := √ 4|π|nVπ log(nδ ) t + 4nVπ log(nδ ) 3t and C2 = C for C defined in Theorem 1, for a fixed constant c, with probability at least 1− δ, in the stochastic noise setting Algorithm 2 returns π∗α after a number of samples no more than c n∑ i=1 max π∈Π:i∈π TFDRπ︸ ︷︷ ︸ FDR−Control +c n∑ i=1 max π∈Πα:i∈π∆π∗α TTPπ︸ ︷︷ ︸ TPR−Elimination and in the persistent noise setting returns π∗α after no more than c ∑n i=1 min { 1, ( maxπ∈Π:i∈π T FDR π + maxπ∈Πα:i∈π∆π∗α T TP π )} Though this result is complicated, each term is understood by considering each way a set can be removed and the time at which an arm i will stop being sampled. Effectively the sample complexity decomposes into two parts, the complexity of showing that a set is FDR-controlled or not, and how long it takes to eliminate it based on TPR. To motivate sFDRπ , if we have a single set π then 1/(|π|∆2π,α) roughly captures the number of times we have to sample each element in π to decide whether it is FDR-controlled or not - so in particular in the general case we have to roughly sample an arm i, maxπ∈Π,i∈π sπ times. However, we can remove a set before showing it is FDR controlled using other conditions which TFDRπ captures. The term in the sample complexity for elimination using TPR is similarly motivated. We now unpack the underbraced terms more carefully simultaneously explaining the sample complexity and the motivation for the proof of Theorem 2. 
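Before doing so, it may help to see the two radii as code. The helpers below are our transcription of C1 from Theorem 2 and of C(pi, pi', t, delta) from Theorem 1 (which plays the role of C2); the local VC dimensions V_pi and V_{pi, pi'} are supplied by the caller, since computing them is specific to the class Pi.

import math

def c1(size_pi, v_pi, n, t, delta):
    # C1(pi, t, delta) = sqrt(4 |pi| n V_pi log(n/delta) / t) + 4 n V_pi log(n/delta) / (3 t)
    return (math.sqrt(4 * size_pi * n * v_pi * math.log(n / delta) / t)
            + 4 * n * v_pi * math.log(n / delta) / (3 * t))

def c2(size_sym_diff, v_pi_pi2, n, t, delta):
    # C(pi, pi', t, delta) = sqrt(8 |pi sym-diff pi'| n V_{pi,pi'} log(n/delta) / t)
    #                        + 4 n V_{pi,pi'} log(n/delta) / (3 t)
    return (math.sqrt(8 * size_sym_diff * n * v_pi_pi2 * math.log(n / delta) / t)
            + 4 * n * v_pi_pi2 * math.log(n / delta) / (3 * t))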
Sample Complexity of FDR-Control In any round where there exists a set π ∈ Ak \ Ck with arm i ∈ π, i.e. π is not yet FDR controlled, there is the potential for sampling i ∈ Sk. A set π only leaves Ak if i) it is shown to not be FDR controlled (condition 1 of the algorithm), ii) because an FDR controlled set eliminates it on the basis of TP (condition 2), or iii) it is contained in an FDR controlled set (condition 3). These three cases reflect the three arguments of the min in the defined quantity TFDRπ , respectively. Taking the maximum over all sets containing an arm i and summing over all i gives the total FDR-control term. This is a large savings relative to naive non-adaptive algorithms that sample until every set π in Π was FDR controlled which would take O(nmaxπ∈Π sFDRπ ) samples. Sample Complexity of TPR-Elimination An FDR-controlled set π ∈ Πα is only removed from Ck when eliminated by an FDR-controlled set with higher TP or if it is removed because it is contained in an FDR-controlled set. In general we can upper bound the former time by the samples needed for π∗α to eliminate π once we know π ∗ α is FDR controlled - this gives rise to maxπ∈Πα:i∈π∆π∗α T TP π . Note that sets are removed in a procedure mimicking active classification and so the active gains there apply to this setting as well. A naive passive algorithm that continues to sample until both the FDR of every set is determined, and π∗α has higher TP than every other FDR-controlled set gives a significantly worse sample complexity of O(nmax{maxπ∈Πα sFDRπ ,maxπ 6∈Πα sTPπ }). Comparison with [5]. Similar to our proposed algorithm, [5] samples in the union of all active sets and maintains statistics on the empirical FDR of each set, along the way removing sets that are not FDR-controlled or have lower TPR than an FDR-controlled set. However, they fail to sample in the symmetric difference, missing an important link between FDR-control and active classification. In particular, the confidence intervals they use are far looser as a result. They also only consider the case of persistent noise. Their proven sample complexity results are no better than those achieved by the passive algorithm that samples each item uniformly, which is precisely the sample complexity described at the end of the previous paragraph. One Dimensional Thresholds Consider a stylized modeling of the topology βαββ from the introduction in the persistent noise setting where Π = {[t] : t ≤ n}, ηi ∼ Ber(β1{i ≤ z}) with β < .5, and z ∈ [n] is assumed to be small, i.e., we assume that there is only a small region in which positive labels can be found and the Bayes classifier is just to predict 0 for all points. Assuming α > 1− β, one can show the sample complexity of Algorithm 2 satisfiesO((1−α)−2(log(n/(1−α))+(1+β)z/(1−α))) while any naive non-adaptive sampling strategy will take at least O(n) samples. Implementation. For simple classes Π such as thresholds or axis aligned rectangles, our algorithm can be made computationally efficient. But for more complex classes there may be a wide gap between theory and practice, just as in classification [36, 20]. However, the algorithm motivates two key ideas - sample in the union of potentially good sets to learn which are FDR controlled, and sample in the symmetric difference to eliminate sets. The latter insight was originally made by A2 in the case of classification and has justified heuristics such as uncertainty sampling [36]. 
Developing analogous heuristics for the former case of FDR-control is an exciting avenue of future work.
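To make these two sampling ideas concrete, the following minimal Python sketch mirrors Algorithm 2 in the stochastic-noise setting. It is our illustration rather than the authors' implementation: conf1 and conf2 are plug-in radii standing in for C1(pi, t, delta) and C2(pi, pi', t, delta) from Theorem 2 (for instance built from the c1 and c2 helpers above, wrapped to match these signatures), candidate sets are assumed non-empty and are enumerated explicitly, and the bookkeeping is deliberately naive.

import random

def active_fdr_control(n, classes, alpha, sample_label, conf1, conf2, delta,
                       max_steps=500000):
    # classes      -- candidate sets pi, each a non-empty frozenset of indices in range(n)
    # sample_label -- sample_label(i) draws a fresh label Y ~ Ber(eta_i)
    # conf1, conf2 -- plug-in radii playing the roles of C1(pi, t, delta) and C2(pi, pi', t, delta)
    A = list(classes)            # active sets A_k
    C = []                       # sets certified as FDR-controlled (C_k)
    R = []                       # FDR-controlled sets later beaten on TP
    y_sum = [0.0] * n            # label sums from the I_t stream (FDR estimates)
    yp_sum = [0.0] * n           # label sums from the J_t stream (TP comparisons)
    k = 1
    for t in range(1, max_steps + 1):
        if len(A) <= 1:
            break
        S = set().union(*(set(p) for p in A if p not in C))          # union of uncertified sets
        T = set.union(*map(set, A)) - set.intersection(*map(set, A)) # symmetric difference of A
        i = random.randrange(n)
        if i in S:
            y_sum[i] += sample_label(i)
        j = random.randrange(n)
        if j in T:
            yp_sum[j] += sample_label(j)
        if t == 2 ** k:                                              # epoch boundary
            d_k = 0.25 * delta / k ** 2

            def fdr_hat(p):
                return 1.0 - (n / (len(p) * t)) * sum(y_sum[a] for a in p)

            def tp_diff(p2, p1):
                return (n / t) * (sum(yp_sum[a] for a in p2 - p1)
                                  - sum(yp_sum[a] for a in p1 - p2))

            # certify sets whose FDR is confidently below alpha
            C = C + [p for p in A if p not in C
                     and fdr_hat(p) + conf1(p, t, d_k) / len(p) <= alpha]
            keep = []
            for p in A:
                if fdr_hat(p) - conf1(p, t, d_k) / len(p) > alpha:         # condition 1
                    continue
                if any(tp_diff(q, p) > conf2(q, p, t, d_k) for q in C):    # condition 2
                    R.append(p)
                    continue
                keep.append(p)
            A = [p for p in keep
                 if not any(p < q for q in C + R)]                         # condition 3
            C = [p for p in C if p in A]
            k += 1
    # among the certified sets, return the one with the largest empirical TP
    return max(C, key=lambda p: sum(yp_sum[a] for a in p), default=None)

The two label streams are kept separate on purpose: y_sum records the I_t draws used for the FDR estimates over the union S_k, while yp_sum records the J_t draws used for the true-positive comparisons over the symmetric difference T_k, mirroring the structure of the pseudocode above.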
1. What is the focus of the paper in terms of active learning criteria? 2. What are the connections drawn by the authors between their approach and other related lines of work? 3. What are the technical results shown by the authors in terms of sample complexity? 4. How does the reviewer assess the clarity and quality of the paper's writing and presentation? 5. Are there any suggestions provided by the reviewer to enhance the paper's content or presentation?
Review
Review This manuscript considers a novel active learning criterion: namely, maximizing the true positive rate (TPR) subject to a constraint on the false discovery rate (FDR). The authors draw connections between this paradigm and several existing lines of work, including combinatorial bandits and active classification. The technical results appear sound and interesting. The authors show sample complexity results for FDR control and for pool-based active learning in terms of novel complexity measures. The paper is for the most part clearly written. However, I have some minor comments that could enhance the quality of the presentation:
- The notation, especially in the statement of Theorem 2, is quite difficult to parse. I would encourage the authors to consider ways to aid readability.
- Please provide a sketch of the proof in the main text that highlights the main techniques and difficulties.
I think this manuscript addresses an interesting problem and presents novel theoretical results and techniques; it will make a great addition to NeurIPS 19.
NIPS
Title A New Perspective on Pool-Based Active Classification and False-Discovery Control Abstract In many scientific settings there is a need for adaptive experimental design to guide the process of identifying regions of the search space that contain as many true positives as possible subject to a low rate of false discoveries (i.e. false alarms). Such regions of the search space could differ drastically from a predicted set that minimizes 0/1 error and accurate identification could require very different sampling strategies. Like active learning for binary classification, this experimental design cannot be optimally chosen a priori, but rather the data must be taken sequentially and adaptively. However, unlike classification with 0/1 error, collecting data adaptively to find a set with high true positive rate and low false discovery rate (FDR) is not as well understood. In this paper we provide the first provably sample efficient adaptive algorithm for this problem. Along the way we highlight connections between classification, combinatorial bandits, and FDR control making contributions to each. 1 Introduction As machine learning has become ubiquitous in the biological, chemical, and material sciences, it has become irresistible to use these techniques not only for making inferences about previously collected data, but also for guiding the data collection process, closing the loop on inference and data collection [10, 38, 41, 39, 33, 31]. However, though collecting data randomly or non-adaptively can be inefficient, ill-informed ways of collecting data adaptively can be catastrophic: a procedure could collect some data, adopt an incorrect belief, collect more data based on this belief, and leave the practitioner with insufficient data in the right places to infer anything with confidence. In a recent high-throughput protein synthesis experiment [33], thousands of short amino acid sequences (length less than 60) were evaluated with the goal of identifying and characterizing a subset of the pool of all possible sequences ( ≈ 1080) containing many sequences that will fold into stable proteins. That is, given an evaluation budget that is just a minuscule proportion of the total number of sequences, the researchers sought to make predictions about individual sequences that would never be evaluated. An initial first round of sequences uniformly sampled from a predefined subset were synthesized to observe whether each sequence was in the set of sequences that will fold,H1, or inH0 = Hc1. Treating this as a classification problem, a linear logistic regression classifier was trained, using these labels and physics based features. Then a set of sequences to test in the next round were chosen to maximize the probability of folding according to this empirical model - a procedure repeated twice more. This strategy suffers two flaws. First, selecting a set to maximize the likelihood of hits given past rounds’ data is effectively using logistic regression to perform optimization similar to follow-the-leader strategies [14]. While more of the sequences evaluated may fold, these observations may provide little information about whether sequences that were not evaluated will fold or not. Second, while it is natural to employ logistic regression or the SVM 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. 
to discriminate between binary outcomes (e.g., fold/not-fold), in many scientific applications the property of interest is incredibly rare and an optimal classifier will just predict a single class e.g. not fold. This is not only an undesirable inference for prediction, but a useless signal for collecting data to identify those regions with higher, but still unlikely, probabilities of folding. Consider the data of [33] reproduced in Figure 1, where the proportion of sequences that fold along with their distributions for a particularly informative feature (Buried NPSA) are shown in each round for two different protein topologies (notated βαββ and ααα). In the last column of Figure 1, even though most of the sequences evaluated are likely to fold, we are sampling in a small part of the overall search space. This limits our overall ability to identify under-explored regions that could potentially contain many sequences that fold, even though the logistic model does not achieve its maximum there. On the other hand, in the top plot of Figure 1, sequences with topology βαββ (shown in blue) so rarely folded that a near-optimal classifier would predict “not fold” for every sequence. Instead of using a procedure that seeks to maximize the probability of folding or classifying sequences as fold or not-fold, a more natural objective is to predict a set of sequences π in such a way as to maximize the true positive rate (TPR) |H1 ∩π|/|H1| while minimizing the false discovery rate (FDR) i.e. |H0 ∩ π|/|π|. That is, π is chosen to contain a large number of sequences that fold while the proportion of false-alarms among those predicted is relatively small. For example, if a set π for βαββ was found that maximized TPR subject to FDR being less than 9/10 then π would be non-empty with the guarantee that at least one in every 10 suggestions was a true-positive; not ideal, but making the best of a bad situation. In some settings, such as for topology ααα (shown in orange), training a classifier to minimize 0/1 loss may be reasonable. Of course, before seeing any data we would not know whether classification is a good objective so it is far more conservative to optimize for maximizing the number of discoveries. Contributions. We propose the first provably sample-efficient adaptive sampling algorithm for maximizing TPR subject to an FDR constraint. This problem has deep connections to active binary classification (e.g., active learning) and pure-exploration for combinatorial bandits that are necessary steps towards motivating our algorithm. We make the following contributions: 1. We improve upon state of the art sample complexity for pool-based active classification in the agnostic setting providing novel sample complexity bounds that do not depend on the disagreementcoefficient for sampling with or without replacement. Our bounds are more granular than previous results as they describe the contribution of a single example to the overall sample complexity. 2. We highlight an important connection between active classification and combinatorial bandits. Our results follow directly from our improvements to the state of the art in combinatorial bandits, extending methods to be near-optimal for classes that go beyond matroids where one need not sample every arm at least once. 3. Our main contribution is the development and analysis of an adaptive sampling algorithm that minimizes the number of samples to identify the set that maximizes the true positive rate subject to a false discovery constraint. 
To the best of our knowledge, this is the first work to demonstrate a sample complexity for this problem that is provably better than non-adaptive sampling. 1.1 Pool Based Classification and FDR Control Here we describe what is known as the pool-based setting for active learning with stochastic labels. Throughout the following we assume access to a finite set of items [n] = {1, · · · , n} with an associated label space {0, 1}. The items can be fixed vectors {xi}ni=1 ∈ Rd but we do not restrict to this case. Associated to each i ∈ [n] there is a Bernoulli distribution Ber(ηi) with ηi ∈ [0, 1]. We imagine a setting where in each round a player chooses It ∈ [n] and observes an independent random variable YIt,t. For any i, Yi,t ∼ Ber(ηi) are i.i.d. Borrowing from the multi-armed bandit literature, we may also refer to the items as arms, and pulling an arm is receiving a sample from its corresponding label distribution. We will refer to this level of generality as the stochastic noise setting. The case when ηi ∈ {0, 1}, i.e. each point i ∈ [n] has a deterministic label Yi,j = ηi for all j ≥ 1, will be referred to as the persistent noise setting. In this setting we can define H1 = {i : ηi = 1},H0 = [n] \ H1. This is a natural setting if the experimental noise is negligible so that performing the same measurement multiple times gives the same result. A classifier is a decision rule f : [n]→ {0, 1} that assigns each item i ∈ [n] a fixed label. We can identify any such decision rule with the set of items it maps to 1, i.e. the set π = {i : i ∈ [n], f(i) = 1}. Instead of considering all possible sets π ⊂ [n], we will restrict ourselves to a smaller class Π ⊂ 2[n]. With this interpretation, one can imagine Π being a combinatorial class, such as the collection of all subsets of [n] of size k, or if we have features, Π could be the sets induced by the set of all linear separators over {xi}. The classification error, or risk of a classifier is given by the expected number of incorrect labels, i.e. R(π) = Pi∼Unif([n]),Yi∼Ber(ηi) (π(i) 6= Yi) = 1 n ( ∑ i 6∈π ηi + ∑ i∈π (1− ηi)) for any π ∈ Π. In the case of persistent noise the above reduces toR(π) = |π∩H0|+|π c∩H1| n = |H1∆π| n where A∆B = (A ∪B)− (A ∩B) for any sets A,B. Problem 1:(Classification) Given a hypothesis class Π ⊆ 2[n] identify π∗ := argmin π∈Π R(π) by requesting as few labels as possible. As described in the introduction, in many situations we are not interested in finding the lowest risk classifier, but instead returning π ∈ Π that contains many discoveries π ∩H1 without too many false alarms π ∩H0. Define ηπ := ∑ i∈π ηx. The false discovery rate (FDR) and true positive rate (TPR) of a set π in the stochastic noise setting are given by FDR(π) := 1− ηπ |π| and TPR(π) := ηπ η[n] In the case of persistent noise, FDR(π) = |H0∩π||π| = 1 − |H1∩π| |π| and TPR(π) = |H1∩π| |H1| . A convenient quantity that we can use to reparametrize these quantities is the true positives: TP (π) :=∑ i∈π ηi. Throughout the following we let Πα = {π ∈ Π : FDR(π) ≤ α}. Problem 2:(Combinatorial FDR Control) Given an α ∈ (0, 1) and hypothesis class Π ⊆ 2[n] identify π∗α = argmax π∈Π,FDR(π)≤α TPR(π) by requesting as few labels as possible. In this work we are agnostic about how η relates to Π, ala [2, 20]. For instance we do not assume the Bayes classifier, argminB∈{0,1}nR(B) is contained in Π. 2 Related Work Active Classification. Active learning for binary classification is a mature field (see surveys [36, 25] and references therein). 
The major theoretical results of the field can coarsely be partitioned into the streaming setting [2, 6, 20, 26] and the pool-based setting [19, 24, 32], noting that algorithms for the former can be used for the latter, [2], an inspiration for our algorithm, is such an example. These results rely on different complexity measures known as the splitting index, the teaching dimension, and (arguably the most popular) the disagreement coefficient. Computational Considerations. While there have been remarkable efforts to make some of these methods more computationally efficient [6, 26], we believe even given infinite computation, many of these previous works are fundamentally inefficient from a sample complexity perspective. This stems from the fact that when applied to common combinatorial classes (for example the collection of all subsets of size k), these algorithms have sample complexities that are off by at least log(n) factors from the best algorithms for these classes. Consequently, in our work we focus on sample complexity alone, and leave matters of computational efficiency for future work. Other Measures. Given a static dataset, the problem of finding a set or classifier that maximizes TPR subject to FDR-control in the information retrieval community is also known as finding a binary classifier that maximizes recall for a given precision level. There is extensive work on the non-adaptive sample complexity of computing measures related to precision and recall such as AUC, and F-scores [35, 9, 1]. However, there have been just a few works that consider adaptively collecting data with the goal of maximizing recall with precision constraints [34, 5], with the latter work being the most related. We will discuss it further after the statement of our main result. In [34], the problem of adaptively estimating the whole ROC curve for a threshold class is considered under a monotonicity assumption on the true positives; our algorithm is agnostic to this assumption. Combinatorial Bandits: The pure-exploration combinatorial bandit game has been studied for the case of all subsets of [n] of size k known as the Top-K problem [22, 29, 30, 28, 37, 17], the bases of a rank-k matroid (for which Top-K is a particular instance) [18, 23, 15], and in the general case [11, 16]. The combinatorial bandit component of our work (see Section 3.2) is closest to [11]. The algorithm of [11] uses a disagreement-based algorithm in the spirit of Successive Elimination for bandits [22], or the A2 for binary classification [2]. Exploring precisely what log factors are necessary has been an active area. [16] demonstrates a family of instances in which they show in the worst-case, the sample complexity must scale with log(|Π|). However, there are many classes like best-arm identification and matroids where sample complexity does not scale with log(|Π|) (see references above). Our own work provides some insight into what log factors are necessary by presenting our results in terms of VC dimension. In addition, we discuss situtations when a log(n) could potentially be avoided by appealing to Sauer’s lemma in the supplementary material. Multiple Hypothesis Testing. Finally, though this work shares language with the adaptive multiplehypothesis testing literature [12, 27, 42, 40], the goals are different. In that setting, there is a set of n hypothesis tests, where the null is that the mean of each distribution is zero and the alternative is that it is nonzero. 
[27] designs a procedure that adaptively allocates samples and uses the BenjaminiHochberg procedure [4] on p-values to return an FDR-controlled set. We are not generally interested in finding which individual arms have means that are above a fixed threshold, but instead, given a hypothesis class we want to return an FDR controlled set in the hypothesis class with high TPR. This is the situation in many structured problems in scientific discovery where the set of arms corresponds to an extremely large set of experiments and we have feature vector associated with each arm. We can’t run each one but we may have some hope of identifying a region of the search space which contains many discoveries. In summary, unlike the setting of [27], Π encodes structure among the sets, we do not insist each item is sampled, and we are allowing for persistent labels - overall we are solving a different and novel problem. 3 Pool Based Active Classification We first establish a pool based active classification algorithm that motivates our development of an adaptive algorithm for FDR-control. For each i define µi := 2ηi − 1 ∈ [−1, 1] so ηi = 1+µi2 . By a simple manipulation of the definition of R(π) above we have R(π) = 1 n n∑ i=1 ηi + 1 n ∑ i∈π (2ηi − 1) = 1 n n∑ i=1 ηi − 1 n ∑ i∈π µi so that argmin π∈Π R(π) = argmax π∈Π ∑ i∈π µi. Define µπ := ∑ i∈π µi. If for some i ∈ [n] we map the jth draw of its label Yi,j 7→ 2Yi,j − 1, then E[2Yi,j − 1] = µi and returning an optimal classifier in the set is equivalent to returning π ∈ Π with the largest µπ . Algorithm 1 exploits this. The algorithm maintains a collection of active setsAk ⊆ Π and an active set of items Tk ⊆ [n] which is the symmetric difference of all sets in Ak. To see why we only sample in Tk, if i ∈ ∩π∈Akπ then π and π′ agree on the label of item i, and any contribution of arm i is canceled in each difference µ̂π − µ̂π′ = µ̂π\π′ − µ̂π′\π for all π, π′ ∈ Ak so we should not pay to sample it. In each round sets π with lower empirical means that fall outside of the confidence interval of sets with higher empirical means are removed. There may be some concern that samples from previous rounds are reused. The estimator µ̂π′,k − µ̂π,k = nt ∑t s=1RIt,s(1(Is ∈ π′ \ π)− 1(Is ∈ π \ π′)) depends on all t samples up to the t-th round, each of which is uniformly and independently drawn at each step. Thus each summand is an unbiased estimate of µπ′ − µπ. However, for π, π′ active in round k, as explained above, a summand is only non-zero if Is ∈ π∆π′ ⊂ Tk hence we only need to observe RIt,s if It ∈ Tk so the estimate of µ̂π′,k − µ̂π,k is unbiased. In practice, since the number of samples that land in Tk follow a binomial distribution, instead of using rejection sampling we could instead have drawn a single sample from a binomial distribution and sampled that many uniformly at random from Tk. Input: δ, Π ⊂ 2[n], Confidence bound C(π′, π, t, δ). Let A1 = Π, T1 = (∪π∈A1π)− (∩π∈A1π), k = 1, Ak will be the active sets in round k for t = 1, 2, · · · if t == 2k: Set δk = .5δ/k2. For each π, π′ let µ̂π′,k − µ̂π,k = nt ( ∑t s=1 RIs,s1{Is ∈ π ′ \ π} − ∑t s=1 RIs,s1{Is ∈ π \ π ′}) Set Ak+1 = Ak − { π ∈ Ak : ∃π′ ∈ Akwith µ̂π′,k − µ̂π,k > C(π′, π, t, δk) } . Set Tk+1 = ( ∪π∈Ak+1π ) − ( ∩π∈Ak+1π ) . k ← k + 1 endif Stochastic Noise: If Tk = ∅, Break. Otherwise, draw It uniformly at random from [n] and if It ∈ Tk receive an associated reward RIt,t = 2YIt,t − 1, YIt,t iid∼ Ber(ηIt). Persistent Noise: If Tk = ∅ or t > n, Break. 
Otherwise, draw It uniformly at random from [n] \ {Is : 1 ≤ s < t} and if It ∈ Tk receive associated reward RIt,t = 2YIt,t − 1, YIt,t = ηIt . Output: π′ ∈ Ak such that µ̂π′,k − µ̂π,k ≥ 0 for all π ∈ Ak \ π′ Algorithm 1: Action Elimination for Active Classification For any A ⊆ 2[n] define V (A) as the VC-dimension of a collection of sets A. Given a family of sets, Π ⊆ 2[n], define B1(k) := {π ∈ Π : |π| = k}, B2(k, π′) := {π ∈ Π : |π∆π′| = k}. Also define the following complexity measures: Vπ := V (B1(|π|)) ∧ |π| and Vπ,π′ := max{V (B2(|π∆π′|, π), V (B2(|π∆π′|, π′))} ∧ |π∆π′| In general Vπ, Vπ,π′ ≤ V (Π). A contribution of our work is the development of confidence intervals that do not depend on a union bound over the class but instead on local VC dimensions. These are described carefully in Lemma 1 in the supplementary materials. Theorem 1 For each i ∈ [n] let µi ∈ [−1, 1] be fixed but unknown and assume {Ri,j}∞j=1 is an i.i.d sequence of random variables such that E[Ri,j ] = µi and Ri,j ∈ [−1, 1]. Define ∆̃π = |µπ − µπ∗ |/|π∆π∗|, and τπ = Vπ,π∗ |π∗∆π| 1 ∆̃2π log ( n log(∆̃−2π )/δ ) . UsingC(π, π′, t, δ) := √ 8|π∆π′|nVπ,π′ log( n δ ) t + 4nVπ,π′ log( n δ ) 3t for a fixed constant c, with probability greater than 1− δ, in the stochastic noise setting Algorithm 1 returns π∗ after a number of samples no more than c ∑n i=1 maxπ∈Π:i∈π∆π∗ τπ and in the persistent noise setting the number of samples needed is no more than c ∑n i=1 min{1,maxπ∈Π:i∈π∆π∗ τπ} Heuristically, the expression 1/|π∆π∗|∆̃2π roughly captures the number of times we would have to sample each i ∈ π∆π∗ to ensure that we can show µπ∗ > µπ. Thus in the more general case, we may expect that we can stop pulling a specific i once each set π such that i ∈ π∆π∗ is removed - accounting for the expression maxπ∈Π,i∈π∆π∗ τπ. The VC-dimension and the logarithmic term in τπ is discussed further below and primarily comes from a careful union bound over the class Π. One always has 1/|π∗∆π| ≤ Vπ,π∗/|π∗∆π| ≤ 1 and both bounds are achievable by different classes Π. In addition, in terms of risk ∆̃π = |µπ−µπ∗ |/|π∆π∗| = n|R(π)−R(π∗)|/|π∆π∗|. Since sampling is done without replacement for persistent noise, there are improved confidence intervals that one can use in that setting described in Lemma 1 in the supplementary materials. Finally, if we had sampled non-adaptively, i.e. without rejection sampling, we would have had a sample complexity of O(nmaxi∈[n] maxπ:Π:i∈π∆π∗ τπ). 3.1 Comparison with previous Active Classification results. One Dimensional Thresholds: In the bound of Theorem 1, a natural question to ask is whether the log(n) dependence can be improved. In the case of nested classes, such as thresholds on a line, we can replace the log(n) with a log log(n) using empirical process theory. This leads to confidence intervals dependent on log log(n) that can be used in place of C(π′, π, t, δ) in Algorithm 1 (see sections C for the confidence intervals and 3.2 for a longer discussion). Under specific noise models we can give a more interpretable sample complexity. Let h ∈ (0, 1], α ≥ 0, z ∈ [0, 1] for some i ∈ [n − 1] and assume that ηi = 12 + sign(z−i/n) 2 h|z − i/n| α so that µi = h|z − i/n|αsign(z − i/n) (this would be a reasonable noise model for topology ααα in the introduction). Let Π = {[k] : k ≤ n}. In this case, inspecting the dominating term of Theorem 1 for i ∈ π∗ we have arg maxπ∈Π:i∈π∆π∗ Vπ,π∗ |π∆π∗| 1 ∆̃2π = [i] and takes a value of ( 1+α h )2 n−1(z − i/n)−2α−1. 
Upper bounding the other terms and summing, the sample complexities can be calculated to be O(log(n) log(log(n)/δ)/h2) if α = 0, and O(n2α log(log(n)/δ)/h2) if α > 0. These rates match the minimax lower bound rates given in [13] up to log log factors. Unlike the algorithms given there, our algorithm works in the agnostic setting, i.e. it is making no assumptions about whether the Bayes classifier is in the class. In the case of non-adaptive sampling, the sum is replaced with the max times n yielding n2α+1 log(log(n)/δ)/h2 which is substantially worse than adaptive sampling. Comparison to previous algorithms: One of the foundational works on active learning is the DHM algorithm of [20] and the A2 algorithm that preceded it [2]. Similar in spirit to our algorithm, DHM requests a label only when it is uncertain how π∗ would label the current point. In general the analysis of the DHM algorithm can not characterize the contribution of each arm to the overall sample complexity leading to sub-optimal sample complexity for combinatorial classes. For example in the the case when Π = {[i]}ni=1, with i∗ = arg maxi∈[n] µi, ignoring logarithmic factors, one can show for this problem the bound of Theorem 1 of [20] scales like n2 maxi 6=i∗(µi∗ − µ−2i ) which is substantially worse than our bound for this problem which scales like ∑ i 6=i∗ ∆ −2 i . Similar arguments can be made for other combinatorial classes such as all subsets of size k. While we are not particularly interested in applying algorithms like DHM to this specific problem, we note that the style of its analysis exposes such a gross inconsistency with past analyses of the best known algorithms that the approach leaves much to be desired. For more details, please see A.2 in the supplementary materials. 3.2 Connections to Combinatorial Bandits A closely related problem to classification is the pure-exploration combinatorial bandit problem. As above we have access to a set of arms [n], and associated to each arm is an unknown distribution νi with support in [−1, 1] - which is arbitrary not just a Bernoulli label distribution. We let {Ri,j}∞j=1 be a sequence of random variables where Ri,j ∼ νi is the jth (i.i.d.) draw from νi satisfying E[Ri,j ] = µi ∈ [−1, 1]. In the persistent noise setting we assume that νi is a point mass at µi ∈ [−1, 1]. Given a collection of sets Π ⊆ 2[n], for each π ∈ Π we define µπ := ∑ i∈π µi the sum of means in π. The pure-exploration for combinatorial bandit problem asks, given a hypothesis class Π ⊆ 2[n] identify π∗ = argmax π∈Π µπ by requesting as few labels as possible. The combinatorial bandit extends many problems considered in the multi-armed bandit literature. For example setting Π = {{i} : i ∈ [n]} is equivalent to the best-arm identification problem. The discussion at the start of Section 3 shows that the classification problem can be mapped to combinatorial bandits - indeed minimizing the 0/1 loss is equivalent to maximizing µπ. In fact, Algorithm 1 gives state of the art results for the pure exploration combinatorial bandit problem and furthermore Theorem 1 holds verbatim. Algorithm 1 is similar to previous action elimination algorithms for combinatorial bandits in the literature, e.g. Algorithm 4 in [11]. However, unlike previous algorithms, we do not insist on sampling each item once, an unrealistic requirement for classification settings - indeed, not having this constraint allows us to reach minimax rates for classification in one dimensions as discussed above. 
In addition, this resolves a concern brought up in [11] for elimination being used for PAC-learning. We prove Theorem 1 in this more general setting in the supplementary materials, see A.3. The connection between FDR control and combinatorial bandits is more direct: we are seeking to find π ∈ Π with maximum ηπ subject to FDR-constraints. This already highlights a key difference Input: Confidence bounds C1(π, t, δ), C2(π, π′, t, δ) Ak ⊂ Π will be the set of active sets in round k. Ck ⊂ Π is the set of FDR-controlled policies in round k. A1 = Π, C1 = ∅, S1 = ∪π∈Ππ, T1 = ⋃ π∈Π π − ⋂ π∈Π π, k = 1. for t = 1, 2, · · · if t = 2k: Let δk = .25δ/k2 For each set π ∈ Ak, and each pair π′, π ∈ Ak update the estimates: F̂DR(π) := 1− n|π|t ∑t s=1 YIs,s1{Is ∈ π} T̂P (π′)− T̂P (π) := n t (∑t s=1 Y ′ Js,s1{Js ∈ π ′\π} − ∑t s=1 Y ′ Js,s1{Js ∈ π\π ′} ) Set Ck+1 = Ck ∪ {π ∈ Ak \ Ck : F̂DR(π) + C1(π, t, δk)/|π| ≤ α} Set Ak+1 = Ak Remove any π from Ak+1 and Ck+1 such that one of the conditions is true: 1. F̂DR(π)− C1(π, t, δk)/|π| > α 2. ∃π′ ∈ Ck+1 with T̂P (π′)− T̂P (π) > C2(π, π′, t, δk) and add π to a set R Remove any π from Ak+1 and Ck+1 such that: 3. ∃π′ ∈ Ck+1 ∪R, such that π ⊂ π′. Set Sk+1 := ⋃ π∈Ak+1\Ck+1 π, and Tk+1 = ⋃ π∈Ak+1 π − ⋂ π∈Ak+1 π. k ← k + 1 endif Stochastic Noise: if |Ak| = 1, Break. Otherwise: Sample It ∼ Unif([n]). If It ∈ Sk, then receive a label YIt,t ∼ Ber(ηIt). Sample Jt ∼ Unif([n]). If Jt ∈ Tk, then receive a label Y ′Jt,t ∼ Ber(ηJt). Persistent Noise: If |Ak| = 1 or t > n, Break. Otherwise: Sample It ∼ [n]\{Is : 1 ≤ s < t}. If It ∈ Sk, then receive a label YIt,t = ηIt . Sample Jt ∼ [n]\{Js : 1 ≤ s < t}. If Jt ∈ Tk, then receive a label Y ′Jt,t = ηJt . Return maxt∈Ck+1 T̂P (π) Algorithm 2: Active FDR control in persistent and bounded noise settings. between classification and FDR-control. In one we choose to sample to maximize ηπ subject to FDR constraints where each ηi ∈ [0, 1], whereas in classification we are trying to maximize µπ where each µi ∈ [−1, 1]. A major consequence of this difference is that ηπ ≤ ηπ′ whenever π ⊆ π′, but such a condition does not hold for µπ, µπ′ . Motivating the sample complexity: As mentioned above, the general combinatorial bandit problem is considered in [11]. There they present an algorithm with sample complexity, C n∑ i=1 max π:i∈π∆π∗ 1 |π∆π∗| 1 ∆̃2π log ( max(|B(|π∆π∗|, π)|, |B(|π∆π∗|, π∗)|)n δ ) This complexity parameter is difficult to interpret directly so we compare it to one more familiar in statistical learning - the VC dimension. To see how this sample complexity relates to ours in Theorem 1, note that log2 |B(k, π∗)| ≤ log2 ( n k ) . k log2(n). Thus by the Sauer-Shelah lemma, V (B(r, π∗)) . log2(|B(r, π∗)|) . min{V (B(r, π∗)), r} log2(n) where . hides a constant. The proof of the confidence intervals in the supplementary effectively combines these two facts along with a union bound over all sets in B(r, π∗). 4 Combinatorial FDR Control Algorithm 2 provides an active sampling method for determining π ∈ Π with FDR(π) ≤ α and maximal TPR, which we denote as π∗α. Since TPR(π) = TP (π)/η[n], we can ignore the denominator and so maximizing the TPR is the same as maximizing TP . The algorithm proceeds in epochs. At all times a collection Ak ⊆ Π of active sets is maintained along with a collection of FDRcontrolled sets Ck ⊆ Ak. In each time step, random indexes It and Jt are sampled from the union Sk = ∪π∈Ak\Ckπ and the symmetric difference Tk = ∪π∈Akπ − ∩π∈Akπ respectively. 
Associated random labels YIt,t, YJt,t ∈ {0, 1} are then obtained from the underlying label distributions Ber(ηIt) and Ber(ηJt). At the start of each epoch, any set with a FDR that is statistically known to be under α is added to Ck, and any sets whose FDR are greater than α are removed from Ak in condition 1. Similar to the active classification algorithm of Figure 1, a set π ∈ Ak is removed in condition 2 if TP (π) is shown to be statistically less than TP (π′) for some π′ ∈ Ck that, crucially, is FDR controlled. In general there may be many sets π ∈ Π such that TP (π) > TP (π∗α) that are not FDR-controlled. Finally in condition 3, we exploit the positivity of the ηi’s: if π ⊂ π′ then deterministically TP (π) ≤ TP (π′), so if π′ is FDR controlled it can be used to eliminate π. The choice of Tk is motivated by active classification: we only need to sample in the symmetric difference. To determine which sets are FDR-controlled it is important that we sample in the entirety of the union of all π ∈ Ak \ Ck, not just the symmetric difference of the Ak, which motivates the choice of Sk. In practical experiments persistent noise is not uncommon and avoids the potential for unbounded sample complexities that potentially occur when FDR(π) ≈ α. Figure 2 demonstrates a model run of the algorithm in the case of five sets Π = {π1, . . . , π5}. Recall that Πα is the subset of Π that is FDR-controlled so that π∗α = arg maxπ∈Πα TP (π). The following gives a sample complexity result for the number of rounds before the algorithm terminates. Theorem 2 Assume that for each i ≤ n there is an associated ηi ∈ [0, 1] and {Yi,j}∞j=1 is an i.i.d. sequence of random variables such that Yi,j ∼ Ber(ηi). For any π ∈ Π define ∆π,α = |FDR(π)−α|, and ∆̃π = |TP (π∗α)− TP (π)|/|π∆π∗| = |TP (π∗α \ π)− TP (π \ π∗α)|/|π∆π∗|, and sFDRπ = Vπ |π| 1 ∆2π,α log ( n log(∆−2π,α)/δ ) , sTPπ = Vπ,π∗α |π∆π∗α| 1 ∆̃2π log ( n log(∆̃−2π )/δ ) In addition define TFDRπ = min{sFDRπ , max{sTPπ , sFDRπ∗α }, minπ′∈Πα π⊂π′ sFDRπ′ } and TTPπ = min{max{sTPπ , sFDRπ∗α }, minπ′∈Πα π⊂π′ sFDRπ′ }. Using C1(π, t, δ) := √ 4|π|nVπ log(nδ ) t + 4nVπ log(nδ ) 3t and C2 = C for C defined in Theorem 1, for a fixed constant c, with probability at least 1− δ, in the stochastic noise setting Algorithm 2 returns π∗α after a number of samples no more than c n∑ i=1 max π∈Π:i∈π TFDRπ︸ ︷︷ ︸ FDR−Control +c n∑ i=1 max π∈Πα:i∈π∆π∗α TTPπ︸ ︷︷ ︸ TPR−Elimination and in the persistent noise setting returns π∗α after no more than c ∑n i=1 min { 1, ( maxπ∈Π:i∈π T FDR π + maxπ∈Πα:i∈π∆π∗α T TP π )} Though this result is complicated, each term is understood by considering each way a set can be removed and the time at which an arm i will stop being sampled. Effectively the sample complexity decomposes into two parts, the complexity of showing that a set is FDR-controlled or not, and how long it takes to eliminate it based on TPR. To motivate sFDRπ , if we have a single set π then 1/(|π|∆2π,α) roughly captures the number of times we have to sample each element in π to decide whether it is FDR-controlled or not - so in particular in the general case we have to roughly sample an arm i, maxπ∈Π,i∈π sπ times. However, we can remove a set before showing it is FDR controlled using other conditions which TFDRπ captures. The term in the sample complexity for elimination using TPR is similarly motivated. We now unpack the underbraced terms more carefully simultaneously explaining the sample complexity and the motivation for the proof of Theorem 2. 
Sample Complexity of FDR-Control In any round where there exists a set π ∈ Ak \ Ck with arm i ∈ π, i.e. π is not yet FDR controlled, there is the potential for sampling i ∈ Sk. A set π only leaves Ak if i) it is shown to not be FDR controlled (condition 1 of the algorithm), ii) because an FDR controlled set eliminates it on the basis of TP (condition 2), or iii) it is contained in an FDR controlled set (condition 3). These three cases reflect the three arguments of the min in the defined quantity TFDRπ , respectively. Taking the maximum over all sets containing an arm i and summing over all i gives the total FDR-control term. This is a large savings relative to naive non-adaptive algorithms that sample until every set π in Π was FDR controlled which would take O(nmaxπ∈Π sFDRπ ) samples. Sample Complexity of TPR-Elimination An FDR-controlled set π ∈ Πα is only removed from Ck when eliminated by an FDR-controlled set with higher TP or if it is removed because it is contained in an FDR-controlled set. In general we can upper bound the former time by the samples needed for π∗α to eliminate π once we know π ∗ α is FDR controlled - this gives rise to maxπ∈Πα:i∈π∆π∗α T TP π . Note that sets are removed in a procedure mimicking active classification and so the active gains there apply to this setting as well. A naive passive algorithm that continues to sample until both the FDR of every set is determined, and π∗α has higher TP than every other FDR-controlled set gives a significantly worse sample complexity of O(nmax{maxπ∈Πα sFDRπ ,maxπ 6∈Πα sTPπ }). Comparison with [5]. Similar to our proposed algorithm, [5] samples in the union of all active sets and maintains statistics on the empirical FDR of each set, along the way removing sets that are not FDR-controlled or have lower TPR than an FDR-controlled set. However, they fail to sample in the symmetric difference, missing an important link between FDR-control and active classification. In particular, the confidence intervals they use are far looser as a result. They also only consider the case of persistent noise. Their proven sample complexity results are no better than those achieved by the passive algorithm that samples each item uniformly, which is precisely the sample complexity described at the end of the previous paragraph. One Dimensional Thresholds Consider a stylized modeling of the topology βαββ from the introduction in the persistent noise setting where Π = {[t] : t ≤ n}, ηi ∼ Ber(β1{i ≤ z}) with β < .5, and z ∈ [n] is assumed to be small, i.e., we assume that there is only a small region in which positive labels can be found and the Bayes classifier is just to predict 0 for all points. Assuming α > 1− β, one can show the sample complexity of Algorithm 2 satisfiesO((1−α)−2(log(n/(1−α))+(1+β)z/(1−α))) while any naive non-adaptive sampling strategy will take at least O(n) samples. Implementation. For simple classes Π such as thresholds or axis aligned rectangles, our algorithm can be made computationally efficient. But for more complex classes there may be a wide gap between theory and practice, just as in classification [36, 20]. However, the algorithm motivates two key ideas - sample in the union of potentially good sets to learn which are FDR controlled, and sample in the symmetric difference to eliminate sets. The latter insight was originally made by A2 in the case of classification and has justified heuristics such as uncertainty sampling [36]. 
Developing analogous heuristics for the former case of FDR-control is an exciting avenue of future work.
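As a concrete illustration of the removal conditions and the two sampling ideas discussed above, here is a small, self-contained simulation. It is a rough sketch under several stated simplifications, not a faithful implementation of Algorithm 2: the two sampling streams over S_k and T_k are merged into a single uniform draw, the VC-based confidence bounds C_1 and C_2 are replaced by crude Hoeffding-style widths, and the instance (a threshold class with the η given below and α = 0.3) is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented instance: positives concentrated at the start (eta = 0.8 there, 0.05 elsewhere).
n, alpha, delta = 200, 0.3, 0.05
eta = np.where(np.arange(n) < 40, 0.8, 0.05)
Pi = [frozenset(range(t)) for t in range(10, n + 1, 10)]      # Pi = {[t]}

label_sum = np.zeros(n)   # sum of observed labels per item
pulls = np.zeros(n)       # number of observations per item

def emp_eta(pi):
    idx = list(pi)
    return label_sum[idx] / np.maximum(pulls[idx], 1.0)       # unpulled items count as 0

def emp_fdr(pi):
    return 1.0 - emp_eta(pi).mean()

def emp_tp(pi):
    return emp_eta(pi).sum()

def width(m, t):
    # Crude Hoeffding-style half-width for an average over m item means after about
    # t/n pulls per item; a stand-in for the VC-based bounds C1, C2 of the paper.
    return np.sqrt(n * np.log(4 * len(Pi) * t / delta) / (2.0 * m * t))

active, controlled, next_epoch = list(Pi), [], 2
for t in range(1, 200_001):
    # Simplification: one uniform draw per step, kept whenever it falls in the
    # union of the active sets (the paper keeps separate streams for S_k and T_k).
    i = int(rng.integers(n))
    if active and i in set().union(*active):
        label_sum[i] += float(rng.random() < eta[i])
        pulls[i] += 1.0

    if t == next_epoch:                                        # epoch boundary, t = 2^k
        next_epoch *= 2
        controlled += [p for p in active if p not in controlled
                       and emp_fdr(p) + width(len(p), t) <= alpha]
        keep = []
        for p in active:
            if emp_fdr(p) - width(len(p), t) > alpha:          # condition 1: FDR too high
                continue
            if any(emp_tp(q) - emp_tp(p) > len(q ^ p) * width(max(len(q ^ p), 1), t)
                   for q in controlled):                       # condition 2: beaten on TP
                continue
            if any(p < q for q in controlled):                 # condition 3: inside a certified set
                continue
            keep.append(p)
        active = keep
        controlled = [p for p in controlled if p in keep]
        if len(active) == 1:
            break

answer = max(controlled, key=emp_tp) if controlled else None
print("stopped at t =", t, "| returned set size:", len(answer) if answer else None)
```

On instances like this one, sets whose FDR sits well above α tend to be discarded by condition 1, strict subsets of a certified set are discarded by condition 3 once that superset enters the controlled collection, and the surviving FDR-controlled set with the largest empirical TP is returned.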
1. What is the originality of the paper's proposed solution? 2. How well-organized is the paper, and are there any suggestions for improvement? 3. Are there any questions regarding the paper's technical aspects, such as the algorithms and proof? 4. How significant is the paper's contribution, and how does it compare to existing methods? 5. Are there any areas where the paper could improve, such as providing more explanation or addressing potential concerns?
Review
Review
Originality: The problem considered in this paper has not been extensively studied yet. The proposed solution is based on a nice combination of techniques from active learning and combinatorial bandits.
Quality: I didn't check the proofs in the appendix, but the results look reasonable to me.
Clarity: This paper is well-organized. However, its technical part is a little bit dense and more explanation might be helpful. Below are some detailed comments:
1. It is very nice to motivate the problem with an application in the introduction. However, the example given is a little confusing to me. For example, Figure 1 is not well explained (what's the difference between βαββ and ααα? what does the distribution of Buried NPSA mean? What do you mean by "the distribution of a feature that is highly correlated with the fitted logistic model"?). These details might be irrelevant, but they can be distracting for readers. Another issue is that when I was reading this part for the first time, at some point I thought the main issue was sampling bias or class imbalance, but actually this is not the point the authors want to make.
2. It might be easier to read if the authors could explain some high-level intuitions for Algorithms 1 and 2 before explaining the details.
3. Many definitions are not well explained (for example, V_\pi, V_{\pi, \pi'} in lines 178-179, \tau_\pi in lines 184-185, ...). Explaining these might shed some light on how the algorithm/proof works and how much improvement has been made.
4. There seem to be some contributions that this paper claims but that are not very well explained in the main body. For example, the log(n) factors mentioned in line 121, the local VC dimensions in line 180, and the improvement over the disagreement coefficient in Section 3.1.
Significance: This problem is well-motivated, and the results are better than standard baselines/existing methods. One downside, though, is that the techniques seem to be a straightforward extension of existing ones.
== After rebuttal: The authors have clarified some of my concerns. I hope the authors can improve the clarity of the technical part, and be more specific about the main challenges and the significance of the contribution, in a future version.
NIPS
Title A New Perspective on Pool-Based Active Classification and False-Discovery Control Abstract In many scientific settings there is a need for adaptive experimental design to guide the process of identifying regions of the search space that contain as many true positives as possible subject to a low rate of false discoveries (i.e. false alarms). Such regions of the search space could differ drastically from a predicted set that minimizes 0/1 error and accurate identification could require very different sampling strategies. Like active learning for binary classification, this experimental design cannot be optimally chosen a priori, but rather the data must be taken sequentially and adaptively. However, unlike classification with 0/1 error, collecting data adaptively to find a set with high true positive rate and low false discovery rate (FDR) is not as well understood. In this paper we provide the first provably sample efficient adaptive algorithm for this problem. Along the way we highlight connections between classification, combinatorial bandits, and FDR control making contributions to each. 1 Introduction As machine learning has become ubiquitous in the biological, chemical, and material sciences, it has become irresistible to use these techniques not only for making inferences about previously collected data, but also for guiding the data collection process, closing the loop on inference and data collection [10, 38, 41, 39, 33, 31]. However, though collecting data randomly or non-adaptively can be inefficient, ill-informed ways of collecting data adaptively can be catastrophic: a procedure could collect some data, adopt an incorrect belief, collect more data based on this belief, and leave the practitioner with insufficient data in the right places to infer anything with confidence. In a recent high-throughput protein synthesis experiment [33], thousands of short amino acid sequences (length less than 60) were evaluated with the goal of identifying and characterizing a subset of the pool of all possible sequences ( ≈ 1080) containing many sequences that will fold into stable proteins. That is, given an evaluation budget that is just a minuscule proportion of the total number of sequences, the researchers sought to make predictions about individual sequences that would never be evaluated. An initial first round of sequences uniformly sampled from a predefined subset were synthesized to observe whether each sequence was in the set of sequences that will fold,H1, or inH0 = Hc1. Treating this as a classification problem, a linear logistic regression classifier was trained, using these labels and physics based features. Then a set of sequences to test in the next round were chosen to maximize the probability of folding according to this empirical model - a procedure repeated twice more. This strategy suffers two flaws. First, selecting a set to maximize the likelihood of hits given past rounds’ data is effectively using logistic regression to perform optimization similar to follow-the-leader strategies [14]. While more of the sequences evaluated may fold, these observations may provide little information about whether sequences that were not evaluated will fold or not. Second, while it is natural to employ logistic regression or the SVM 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. 
to discriminate between binary outcomes (e.g., fold/not-fold), in many scientific applications the property of interest is incredibly rare and an optimal classifier will just predict a single class e.g. not fold. This is not only an undesirable inference for prediction, but a useless signal for collecting data to identify those regions with higher, but still unlikely, probabilities of folding. Consider the data of [33] reproduced in Figure 1, where the proportion of sequences that fold along with their distributions for a particularly informative feature (Buried NPSA) are shown in each round for two different protein topologies (notated βαββ and ααα). In the last column of Figure 1, even though most of the sequences evaluated are likely to fold, we are sampling in a small part of the overall search space. This limits our overall ability to identify under-explored regions that could potentially contain many sequences that fold, even though the logistic model does not achieve its maximum there. On the other hand, in the top plot of Figure 1, sequences with topology βαββ (shown in blue) so rarely folded that a near-optimal classifier would predict “not fold” for every sequence. Instead of using a procedure that seeks to maximize the probability of folding or classifying sequences as fold or not-fold, a more natural objective is to predict a set of sequences π in such a way as to maximize the true positive rate (TPR) |H1 ∩π|/|H1| while minimizing the false discovery rate (FDR) i.e. |H0 ∩ π|/|π|. That is, π is chosen to contain a large number of sequences that fold while the proportion of false-alarms among those predicted is relatively small. For example, if a set π for βαββ was found that maximized TPR subject to FDR being less than 9/10 then π would be non-empty with the guarantee that at least one in every 10 suggestions was a true-positive; not ideal, but making the best of a bad situation. In some settings, such as for topology ααα (shown in orange), training a classifier to minimize 0/1 loss may be reasonable. Of course, before seeing any data we would not know whether classification is a good objective so it is far more conservative to optimize for maximizing the number of discoveries. Contributions. We propose the first provably sample-efficient adaptive sampling algorithm for maximizing TPR subject to an FDR constraint. This problem has deep connections to active binary classification (e.g., active learning) and pure-exploration for combinatorial bandits that are necessary steps towards motivating our algorithm. We make the following contributions: 1. We improve upon state of the art sample complexity for pool-based active classification in the agnostic setting providing novel sample complexity bounds that do not depend on the disagreementcoefficient for sampling with or without replacement. Our bounds are more granular than previous results as they describe the contribution of a single example to the overall sample complexity. 2. We highlight an important connection between active classification and combinatorial bandits. Our results follow directly from our improvements to the state of the art in combinatorial bandits, extending methods to be near-optimal for classes that go beyond matroids where one need not sample every arm at least once. 3. Our main contribution is the development and analysis of an adaptive sampling algorithm that minimizes the number of samples to identify the set that maximizes the true positive rate subject to a false discovery constraint. 
To the best of our knowledge, this is the first work to demonstrate a sample complexity for this problem that is provably better than non-adaptive sampling. 1.1 Pool Based Classification and FDR Control Here we describe what is known as the pool-based setting for active learning with stochastic labels. Throughout the following we assume access to a finite set of items [n] = {1, · · · , n} with an associated label space {0, 1}. The items can be fixed vectors {xi}ni=1 ∈ Rd but we do not restrict to this case. Associated to each i ∈ [n] there is a Bernoulli distribution Ber(ηi) with ηi ∈ [0, 1]. We imagine a setting where in each round a player chooses It ∈ [n] and observes an independent random variable YIt,t. For any i, Yi,t ∼ Ber(ηi) are i.i.d. Borrowing from the multi-armed bandit literature, we may also refer to the items as arms, and pulling an arm is receiving a sample from its corresponding label distribution. We will refer to this level of generality as the stochastic noise setting. The case when ηi ∈ {0, 1}, i.e. each point i ∈ [n] has a deterministic label Yi,j = ηi for all j ≥ 1, will be referred to as the persistent noise setting. In this setting we can define H1 = {i : ηi = 1},H0 = [n] \ H1. This is a natural setting if the experimental noise is negligible so that performing the same measurement multiple times gives the same result. A classifier is a decision rule f : [n]→ {0, 1} that assigns each item i ∈ [n] a fixed label. We can identify any such decision rule with the set of items it maps to 1, i.e. the set π = {i : i ∈ [n], f(i) = 1}. Instead of considering all possible sets π ⊂ [n], we will restrict ourselves to a smaller class Π ⊂ 2[n]. With this interpretation, one can imagine Π being a combinatorial class, such as the collection of all subsets of [n] of size k, or if we have features, Π could be the sets induced by the set of all linear separators over {xi}. The classification error, or risk of a classifier is given by the expected number of incorrect labels, i.e. R(π) = Pi∼Unif([n]),Yi∼Ber(ηi) (π(i) 6= Yi) = 1 n ( ∑ i 6∈π ηi + ∑ i∈π (1− ηi)) for any π ∈ Π. In the case of persistent noise the above reduces toR(π) = |π∩H0|+|π c∩H1| n = |H1∆π| n where A∆B = (A ∪B)− (A ∩B) for any sets A,B. Problem 1:(Classification) Given a hypothesis class Π ⊆ 2[n] identify π∗ := argmin π∈Π R(π) by requesting as few labels as possible. As described in the introduction, in many situations we are not interested in finding the lowest risk classifier, but instead returning π ∈ Π that contains many discoveries π ∩H1 without too many false alarms π ∩H0. Define ηπ := ∑ i∈π ηx. The false discovery rate (FDR) and true positive rate (TPR) of a set π in the stochastic noise setting are given by FDR(π) := 1− ηπ |π| and TPR(π) := ηπ η[n] In the case of persistent noise, FDR(π) = |H0∩π||π| = 1 − |H1∩π| |π| and TPR(π) = |H1∩π| |H1| . A convenient quantity that we can use to reparametrize these quantities is the true positives: TP (π) :=∑ i∈π ηi. Throughout the following we let Πα = {π ∈ Π : FDR(π) ≤ α}. Problem 2:(Combinatorial FDR Control) Given an α ∈ (0, 1) and hypothesis class Π ⊆ 2[n] identify π∗α = argmax π∈Π,FDR(π)≤α TPR(π) by requesting as few labels as possible. In this work we are agnostic about how η relates to Π, ala [2, 20]. For instance we do not assume the Bayes classifier, argminB∈{0,1}nR(B) is contained in Π. 2 Related Work Active Classification. Active learning for binary classification is a mature field (see surveys [36, 25] and references therein). 
The major theoretical results of the field can coarsely be partitioned into the streaming setting [2, 6, 20, 26] and the pool-based setting [19, 24, 32], noting that algorithms for the former can be used for the latter, [2], an inspiration for our algorithm, is such an example. These results rely on different complexity measures known as the splitting index, the teaching dimension, and (arguably the most popular) the disagreement coefficient. Computational Considerations. While there have been remarkable efforts to make some of these methods more computationally efficient [6, 26], we believe even given infinite computation, many of these previous works are fundamentally inefficient from a sample complexity perspective. This stems from the fact that when applied to common combinatorial classes (for example the collection of all subsets of size k), these algorithms have sample complexities that are off by at least log(n) factors from the best algorithms for these classes. Consequently, in our work we focus on sample complexity alone, and leave matters of computational efficiency for future work. Other Measures. Given a static dataset, the problem of finding a set or classifier that maximizes TPR subject to FDR-control in the information retrieval community is also known as finding a binary classifier that maximizes recall for a given precision level. There is extensive work on the non-adaptive sample complexity of computing measures related to precision and recall such as AUC, and F-scores [35, 9, 1]. However, there have been just a few works that consider adaptively collecting data with the goal of maximizing recall with precision constraints [34, 5], with the latter work being the most related. We will discuss it further after the statement of our main result. In [34], the problem of adaptively estimating the whole ROC curve for a threshold class is considered under a monotonicity assumption on the true positives; our algorithm is agnostic to this assumption. Combinatorial Bandits: The pure-exploration combinatorial bandit game has been studied for the case of all subsets of [n] of size k known as the Top-K problem [22, 29, 30, 28, 37, 17], the bases of a rank-k matroid (for which Top-K is a particular instance) [18, 23, 15], and in the general case [11, 16]. The combinatorial bandit component of our work (see Section 3.2) is closest to [11]. The algorithm of [11] uses a disagreement-based algorithm in the spirit of Successive Elimination for bandits [22], or the A2 for binary classification [2]. Exploring precisely what log factors are necessary has been an active area. [16] demonstrates a family of instances in which they show in the worst-case, the sample complexity must scale with log(|Π|). However, there are many classes like best-arm identification and matroids where sample complexity does not scale with log(|Π|) (see references above). Our own work provides some insight into what log factors are necessary by presenting our results in terms of VC dimension. In addition, we discuss situtations when a log(n) could potentially be avoided by appealing to Sauer’s lemma in the supplementary material. Multiple Hypothesis Testing. Finally, though this work shares language with the adaptive multiplehypothesis testing literature [12, 27, 42, 40], the goals are different. In that setting, there is a set of n hypothesis tests, where the null is that the mean of each distribution is zero and the alternative is that it is nonzero. 
[27] designs a procedure that adaptively allocates samples and uses the Benjamini-Hochberg procedure [4] on p-values to return an FDR-controlled set. We are not generally interested in finding which individual arms have means that are above a fixed threshold; instead, given a hypothesis class we want to return an FDR-controlled set in the hypothesis class with high TPR. This is the situation in many structured problems in scientific discovery where the set of arms corresponds to an extremely large set of experiments and we have a feature vector associated with each arm. We can’t run each one, but we may have some hope of identifying a region of the search space which contains many discoveries. In summary, unlike the setting of [27], Π encodes structure among the sets, we do not insist each item is sampled, and we allow for persistent labels - overall we are solving a different and novel problem.

3 Pool Based Active Classification

We first establish a pool based active classification algorithm that motivates our development of an adaptive algorithm for FDR-control. For each i define µ_i := 2η_i − 1 ∈ [−1, 1], so that η_i = (1 + µ_i)/2. By a simple manipulation of the definition of R(π) above we have
R(\pi) = \frac{1}{n}\sum_{i=1}^{n} \eta_i - \frac{1}{n}\sum_{i \in \pi} (2\eta_i - 1) = \frac{1}{n}\sum_{i=1}^{n} \eta_i - \frac{1}{n}\sum_{i \in \pi} \mu_i,
so that argmin_{π ∈ Π} R(π) = argmax_{π ∈ Π} Σ_{i ∈ π} µ_i. Define µ_π := Σ_{i ∈ π} µ_i. If for some i ∈ [n] we map the jth draw of its label Y_{i,j} ↦ 2Y_{i,j} − 1, then E[2Y_{i,j} − 1] = µ_i, and returning an optimal classifier in the class is equivalent to returning the π ∈ Π with the largest µ_π. Algorithm 1 exploits this. The algorithm maintains a collection of active sets A_k ⊆ Π and an active set of items T_k ⊆ [n], which is the symmetric difference of all sets in A_k. To see why we only sample in T_k, note that if i ∈ ∩_{π ∈ A_k} π then every pair π, π′ ∈ A_k agrees on the label of item i, and any contribution of arm i cancels in each difference µ̂_π − µ̂_{π′} = µ̂_{π\π′} − µ̂_{π′\π}, so we should not pay to sample it. In each round, sets π with lower empirical means that fall outside of the confidence interval of sets with higher empirical means are removed. There may be some concern that samples from previous rounds are reused. The estimator
\hat{\mu}_{\pi',k} - \hat{\mu}_{\pi,k} = \frac{n}{t} \sum_{s=1}^{t} R_{I_s,s}\big( \mathbf{1}\{I_s \in \pi' \setminus \pi\} - \mathbf{1}\{I_s \in \pi \setminus \pi'\} \big)
depends on all t samples up to the t-th round, each of which is uniformly and independently drawn at each step. Thus each summand, once scaled by n, is an unbiased estimate of µ_{π′} − µ_π. However, for π, π′ active in round k, as explained above, a summand is only non-zero if I_s ∈ π∆π′ ⊆ T_k; hence we only ever need to observe R_{I_t,t} when I_t ∈ T_k, and the estimate of µ̂_{π′,k} − µ̂_{π,k} remains unbiased. In practice, since the number of draws that land in T_k follows a binomial distribution, instead of using rejection sampling we could draw a single binomial random variable and then sample that many items uniformly at random from T_k.

Input: δ, Π ⊂ 2^[n], confidence bound C(π′, π, t, δ).
Let A_1 = Π, T_1 = (∪_{π ∈ A_1} π) − (∩_{π ∈ A_1} π), k = 1 (A_k will be the active sets in round k).
for t = 1, 2, · · ·
  if t == 2^k:
    Set δ_k = 0.5 δ / k^2.
    For each π, π′ let µ̂_{π′,k} − µ̂_{π,k} = (n/t) ( Σ_{s=1}^{t} R_{I_s,s} 1{I_s ∈ π′ \ π} − Σ_{s=1}^{t} R_{I_s,s} 1{I_s ∈ π \ π′} ).
    Set A_{k+1} = A_k − { π ∈ A_k : ∃ π′ ∈ A_k with µ̂_{π′,k} − µ̂_{π,k} > C(π′, π, t, δ_k) }.
    Set T_{k+1} = ( ∪_{π ∈ A_{k+1}} π ) − ( ∩_{π ∈ A_{k+1}} π ).
    k ← k + 1
  endif
  Stochastic Noise: If T_k = ∅, Break. Otherwise, draw I_t uniformly at random from [n] and if I_t ∈ T_k receive an associated reward R_{I_t,t} = 2 Y_{I_t,t} − 1, with Y_{I_t,t} ∼ Ber(η_{I_t}) drawn i.i.d.
  Persistent Noise: If T_k = ∅ or t > n, Break.
Otherwise, draw It uniformly at random from [n] \ {Is : 1 ≤ s < t} and if It ∈ Tk receive associated reward RIt,t = 2YIt,t − 1, YIt,t = ηIt . Output: π′ ∈ Ak such that µ̂π′,k − µ̂π,k ≥ 0 for all π ∈ Ak \ π′ Algorithm 1: Action Elimination for Active Classification For any A ⊆ 2[n] define V (A) as the VC-dimension of a collection of sets A. Given a family of sets, Π ⊆ 2[n], define B1(k) := {π ∈ Π : |π| = k}, B2(k, π′) := {π ∈ Π : |π∆π′| = k}. Also define the following complexity measures: Vπ := V (B1(|π|)) ∧ |π| and Vπ,π′ := max{V (B2(|π∆π′|, π), V (B2(|π∆π′|, π′))} ∧ |π∆π′| In general Vπ, Vπ,π′ ≤ V (Π). A contribution of our work is the development of confidence intervals that do not depend on a union bound over the class but instead on local VC dimensions. These are described carefully in Lemma 1 in the supplementary materials. Theorem 1 For each i ∈ [n] let µi ∈ [−1, 1] be fixed but unknown and assume {Ri,j}∞j=1 is an i.i.d sequence of random variables such that E[Ri,j ] = µi and Ri,j ∈ [−1, 1]. Define ∆̃π = |µπ − µπ∗ |/|π∆π∗|, and τπ = Vπ,π∗ |π∗∆π| 1 ∆̃2π log ( n log(∆̃−2π )/δ ) . UsingC(π, π′, t, δ) := √ 8|π∆π′|nVπ,π′ log( n δ ) t + 4nVπ,π′ log( n δ ) 3t for a fixed constant c, with probability greater than 1− δ, in the stochastic noise setting Algorithm 1 returns π∗ after a number of samples no more than c ∑n i=1 maxπ∈Π:i∈π∆π∗ τπ and in the persistent noise setting the number of samples needed is no more than c ∑n i=1 min{1,maxπ∈Π:i∈π∆π∗ τπ} Heuristically, the expression 1/|π∆π∗|∆̃2π roughly captures the number of times we would have to sample each i ∈ π∆π∗ to ensure that we can show µπ∗ > µπ. Thus in the more general case, we may expect that we can stop pulling a specific i once each set π such that i ∈ π∆π∗ is removed - accounting for the expression maxπ∈Π,i∈π∆π∗ τπ. The VC-dimension and the logarithmic term in τπ is discussed further below and primarily comes from a careful union bound over the class Π. One always has 1/|π∗∆π| ≤ Vπ,π∗/|π∗∆π| ≤ 1 and both bounds are achievable by different classes Π. In addition, in terms of risk ∆̃π = |µπ−µπ∗ |/|π∆π∗| = n|R(π)−R(π∗)|/|π∆π∗|. Since sampling is done without replacement for persistent noise, there are improved confidence intervals that one can use in that setting described in Lemma 1 in the supplementary materials. Finally, if we had sampled non-adaptively, i.e. without rejection sampling, we would have had a sample complexity of O(nmaxi∈[n] maxπ:Π:i∈π∆π∗ τπ). 3.1 Comparison with previous Active Classification results. One Dimensional Thresholds: In the bound of Theorem 1, a natural question to ask is whether the log(n) dependence can be improved. In the case of nested classes, such as thresholds on a line, we can replace the log(n) with a log log(n) using empirical process theory. This leads to confidence intervals dependent on log log(n) that can be used in place of C(π′, π, t, δ) in Algorithm 1 (see sections C for the confidence intervals and 3.2 for a longer discussion). Under specific noise models we can give a more interpretable sample complexity. Let h ∈ (0, 1], α ≥ 0, z ∈ [0, 1] for some i ∈ [n − 1] and assume that ηi = 12 + sign(z−i/n) 2 h|z − i/n| α so that µi = h|z − i/n|αsign(z − i/n) (this would be a reasonable noise model for topology ααα in the introduction). Let Π = {[k] : k ≤ n}. In this case, inspecting the dominating term of Theorem 1 for i ∈ π∗ we have arg maxπ∈Π:i∈π∆π∗ Vπ,π∗ |π∆π∗| 1 ∆̃2π = [i] and takes a value of ( 1+α h )2 n−1(z − i/n)−2α−1. 
Upper bounding the other terms and summing, the sample complexities can be calculated to be O(log(n) log(log(n)/δ)/h2) if α = 0, and O(n2α log(log(n)/δ)/h2) if α > 0. These rates match the minimax lower bound rates given in [13] up to log log factors. Unlike the algorithms given there, our algorithm works in the agnostic setting, i.e. it is making no assumptions about whether the Bayes classifier is in the class. In the case of non-adaptive sampling, the sum is replaced with the max times n yielding n2α+1 log(log(n)/δ)/h2 which is substantially worse than adaptive sampling. Comparison to previous algorithms: One of the foundational works on active learning is the DHM algorithm of [20] and the A2 algorithm that preceded it [2]. Similar in spirit to our algorithm, DHM requests a label only when it is uncertain how π∗ would label the current point. In general the analysis of the DHM algorithm can not characterize the contribution of each arm to the overall sample complexity leading to sub-optimal sample complexity for combinatorial classes. For example in the the case when Π = {[i]}ni=1, with i∗ = arg maxi∈[n] µi, ignoring logarithmic factors, one can show for this problem the bound of Theorem 1 of [20] scales like n2 maxi 6=i∗(µi∗ − µ−2i ) which is substantially worse than our bound for this problem which scales like ∑ i 6=i∗ ∆ −2 i . Similar arguments can be made for other combinatorial classes such as all subsets of size k. While we are not particularly interested in applying algorithms like DHM to this specific problem, we note that the style of its analysis exposes such a gross inconsistency with past analyses of the best known algorithms that the approach leaves much to be desired. For more details, please see A.2 in the supplementary materials. 3.2 Connections to Combinatorial Bandits A closely related problem to classification is the pure-exploration combinatorial bandit problem. As above we have access to a set of arms [n], and associated to each arm is an unknown distribution νi with support in [−1, 1] - which is arbitrary not just a Bernoulli label distribution. We let {Ri,j}∞j=1 be a sequence of random variables where Ri,j ∼ νi is the jth (i.i.d.) draw from νi satisfying E[Ri,j ] = µi ∈ [−1, 1]. In the persistent noise setting we assume that νi is a point mass at µi ∈ [−1, 1]. Given a collection of sets Π ⊆ 2[n], for each π ∈ Π we define µπ := ∑ i∈π µi the sum of means in π. The pure-exploration for combinatorial bandit problem asks, given a hypothesis class Π ⊆ 2[n] identify π∗ = argmax π∈Π µπ by requesting as few labels as possible. The combinatorial bandit extends many problems considered in the multi-armed bandit literature. For example setting Π = {{i} : i ∈ [n]} is equivalent to the best-arm identification problem. The discussion at the start of Section 3 shows that the classification problem can be mapped to combinatorial bandits - indeed minimizing the 0/1 loss is equivalent to maximizing µπ. In fact, Algorithm 1 gives state of the art results for the pure exploration combinatorial bandit problem and furthermore Theorem 1 holds verbatim. Algorithm 1 is similar to previous action elimination algorithms for combinatorial bandits in the literature, e.g. Algorithm 4 in [11]. However, unlike previous algorithms, we do not insist on sampling each item once, an unrealistic requirement for classification settings - indeed, not having this constraint allows us to reach minimax rates for classification in one dimensions as discussed above. 
In addition, this resolves a concern brought up in [11] for elimination being used for PAC-learning. We prove Theorem 1 in this more general setting in the supplementary materials, see A.3. The connection between FDR control and combinatorial bandits is more direct: we are seeking to find π ∈ Π with maximum ηπ subject to FDR-constraints. This already highlights a key difference Input: Confidence bounds C1(π, t, δ), C2(π, π′, t, δ) Ak ⊂ Π will be the set of active sets in round k. Ck ⊂ Π is the set of FDR-controlled policies in round k. A1 = Π, C1 = ∅, S1 = ∪π∈Ππ, T1 = ⋃ π∈Π π − ⋂ π∈Π π, k = 1. for t = 1, 2, · · · if t = 2k: Let δk = .25δ/k2 For each set π ∈ Ak, and each pair π′, π ∈ Ak update the estimates: F̂DR(π) := 1− n|π|t ∑t s=1 YIs,s1{Is ∈ π} T̂P (π′)− T̂P (π) := n t (∑t s=1 Y ′ Js,s1{Js ∈ π ′\π} − ∑t s=1 Y ′ Js,s1{Js ∈ π\π ′} ) Set Ck+1 = Ck ∪ {π ∈ Ak \ Ck : F̂DR(π) + C1(π, t, δk)/|π| ≤ α} Set Ak+1 = Ak Remove any π from Ak+1 and Ck+1 such that one of the conditions is true: 1. F̂DR(π)− C1(π, t, δk)/|π| > α 2. ∃π′ ∈ Ck+1 with T̂P (π′)− T̂P (π) > C2(π, π′, t, δk) and add π to a set R Remove any π from Ak+1 and Ck+1 such that: 3. ∃π′ ∈ Ck+1 ∪R, such that π ⊂ π′. Set Sk+1 := ⋃ π∈Ak+1\Ck+1 π, and Tk+1 = ⋃ π∈Ak+1 π − ⋂ π∈Ak+1 π. k ← k + 1 endif Stochastic Noise: if |Ak| = 1, Break. Otherwise: Sample It ∼ Unif([n]). If It ∈ Sk, then receive a label YIt,t ∼ Ber(ηIt). Sample Jt ∼ Unif([n]). If Jt ∈ Tk, then receive a label Y ′Jt,t ∼ Ber(ηJt). Persistent Noise: If |Ak| = 1 or t > n, Break. Otherwise: Sample It ∼ [n]\{Is : 1 ≤ s < t}. If It ∈ Sk, then receive a label YIt,t = ηIt . Sample Jt ∼ [n]\{Js : 1 ≤ s < t}. If Jt ∈ Tk, then receive a label Y ′Jt,t = ηJt . Return maxt∈Ck+1 T̂P (π) Algorithm 2: Active FDR control in persistent and bounded noise settings. between classification and FDR-control. In one we choose to sample to maximize ηπ subject to FDR constraints where each ηi ∈ [0, 1], whereas in classification we are trying to maximize µπ where each µi ∈ [−1, 1]. A major consequence of this difference is that ηπ ≤ ηπ′ whenever π ⊆ π′, but such a condition does not hold for µπ, µπ′ . Motivating the sample complexity: As mentioned above, the general combinatorial bandit problem is considered in [11]. There they present an algorithm with sample complexity, C n∑ i=1 max π:i∈π∆π∗ 1 |π∆π∗| 1 ∆̃2π log ( max(|B(|π∆π∗|, π)|, |B(|π∆π∗|, π∗)|)n δ ) This complexity parameter is difficult to interpret directly so we compare it to one more familiar in statistical learning - the VC dimension. To see how this sample complexity relates to ours in Theorem 1, note that log2 |B(k, π∗)| ≤ log2 ( n k ) . k log2(n). Thus by the Sauer-Shelah lemma, V (B(r, π∗)) . log2(|B(r, π∗)|) . min{V (B(r, π∗)), r} log2(n) where . hides a constant. The proof of the confidence intervals in the supplementary effectively combines these two facts along with a union bound over all sets in B(r, π∗). 4 Combinatorial FDR Control Algorithm 2 provides an active sampling method for determining π ∈ Π with FDR(π) ≤ α and maximal TPR, which we denote as π∗α. Since TPR(π) = TP (π)/η[n], we can ignore the denominator and so maximizing the TPR is the same as maximizing TP . The algorithm proceeds in epochs. At all times a collection Ak ⊆ Π of active sets is maintained along with a collection of FDRcontrolled sets Ck ⊆ Ak. In each time step, random indexes It and Jt are sampled from the union Sk = ∪π∈Ak\Ckπ and the symmetric difference Tk = ∪π∈Akπ − ∩π∈Akπ respectively. 
Associated random labels Y_{I_t,t}, Y_{J_t,t} ∈ {0, 1} are then obtained from the underlying label distributions Ber(η_{I_t}) and Ber(η_{J_t}). At the start of each epoch, any set whose FDR is statistically known to be under α is added to C_k, and any set whose FDR is statistically known to exceed α is removed from A_k in condition 1. Similar to the active classification algorithm (Algorithm 1), a set π ∈ A_k is removed in condition 2 if TP(π) is shown to be statistically less than TP(π′) for some π′ ∈ C_k that, crucially, is FDR-controlled. In general there may be many sets π ∈ Π such that TP(π) > TP(π∗_α) that are not FDR-controlled. Finally, in condition 3, we exploit the positivity of the η_i’s: if π ⊂ π′ then deterministically TP(π) ≤ TP(π′), so if π′ is FDR-controlled it can be used to eliminate π. The choice of T_k is motivated by active classification: we only need to sample in the symmetric difference. To determine which sets are FDR-controlled, it is important that we sample in the entirety of the union of all π ∈ A_k \ C_k, not just the symmetric difference of the sets in A_k, which motivates the choice of S_k. In practical experiments persistent noise is not uncommon, and it avoids the potentially unbounded sample complexities that can occur in the stochastic setting when FDR(π) ≈ α. Figure 2 demonstrates a model run of the algorithm in the case of five sets Π = {π_1, . . . , π_5}. Recall that Π_α is the subset of Π that is FDR-controlled, so that π∗_α = argmax_{π ∈ Π_α} TP(π). The following gives a sample complexity result for the number of samples drawn before the algorithm terminates.

Theorem 2 Assume that for each i ≤ n there is an associated η_i ∈ [0, 1] and {Y_{i,j}}_{j=1}^∞ is an i.i.d. sequence of random variables such that Y_{i,j} ∼ Ber(η_i). For any π ∈ Π define Δ_{π,α} = |FDR(π) − α| and
\tilde{\Delta}_\pi = \frac{|TP(\pi^*_\alpha) - TP(\pi)|}{|\pi \,\Delta\, \pi^*_\alpha|} = \frac{|TP(\pi^*_\alpha \setminus \pi) - TP(\pi \setminus \pi^*_\alpha)|}{|\pi \,\Delta\, \pi^*_\alpha|},
and
s^{FDR}_\pi = \frac{V_\pi}{|\pi|} \cdot \frac{1}{\Delta_{\pi,\alpha}^2} \log\!\big( n \log(\Delta_{\pi,\alpha}^{-2}) / \delta \big), \qquad s^{TP}_\pi = \frac{V_{\pi,\pi^*_\alpha}}{|\pi \,\Delta\, \pi^*_\alpha|} \cdot \frac{1}{\tilde{\Delta}_\pi^{2}} \log\!\big( n \log(\tilde{\Delta}_\pi^{-2}) / \delta \big).
In addition define
T^{FDR}_\pi = \min\Big\{ s^{FDR}_\pi,\; \max\{ s^{TP}_\pi, s^{FDR}_{\pi^*_\alpha} \},\; \min_{\pi' \in \Pi_\alpha : \pi \subset \pi'} s^{FDR}_{\pi'} \Big\} \quad\text{and}\quad T^{TP}_\pi = \min\Big\{ \max\{ s^{TP}_\pi, s^{FDR}_{\pi^*_\alpha} \},\; \min_{\pi' \in \Pi_\alpha : \pi \subset \pi'} s^{FDR}_{\pi'} \Big\}.
Using
C_1(\pi, t, \delta) := \sqrt{\frac{4 |\pi| \, n \, V_\pi \log(n/\delta)}{t}} + \frac{4 n V_\pi \log(n/\delta)}{3t}
and C_2 = C for the C defined in Theorem 1, for a fixed constant c, with probability at least 1 − δ, in the stochastic noise setting Algorithm 2 returns π∗_α after a number of samples no more than
c \sum_{i=1}^{n} \max_{\pi \in \Pi : i \in \pi} T^{FDR}_\pi \;\;(\text{FDR-control}) \;+\; c \sum_{i=1}^{n} \max_{\pi \in \Pi_\alpha : i \in \pi \Delta \pi^*_\alpha} T^{TP}_\pi \;\;(\text{TPR-elimination}),
and in the persistent noise setting returns π∗_α after no more than
c \sum_{i=1}^{n} \min\Big\{ 1, \; \max_{\pi \in \Pi : i \in \pi} T^{FDR}_\pi + \max_{\pi \in \Pi_\alpha : i \in \pi \Delta \pi^*_\alpha} T^{TP}_\pi \Big\}.

Though this result is complicated, each term is understood by considering each way a set can be removed and the time at which an arm i will stop being sampled. Effectively the sample complexity decomposes into two parts: the complexity of showing that a set is FDR-controlled or not, and how long it takes to eliminate it based on TPR. To motivate s^{FDR}_π, if we have a single set π then 1/(|π| Δ_{π,α}^2) roughly captures the number of times we have to sample each element in π to decide whether it is FDR-controlled or not - so in particular, in the general case, we have to sample an arm i roughly max_{π ∈ Π : i ∈ π} s^{FDR}_π times. However, we can remove a set before showing it is FDR-controlled using the other conditions, which T^{FDR}_π captures. The term in the sample complexity for elimination using TPR is similarly motivated. We now unpack the two labeled terms more carefully, simultaneously explaining the sample complexity and the motivation for the proof of Theorem 2.
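Before doing so, the gap quantities that drive the two terms can be made concrete with a tiny numerical example. The snippet below evaluates Δ_{π,α}, the symmetric-difference size, and the TP gap on an invented threshold instance, and reports the Theorem 2 scalings s^{FDR}_π and s^{TP}_π with the V_π and logarithmic factors dropped; everything here is a population-level calculation with no sampling, and the specific numbers (η, α, the grid of thresholds) are assumptions made purely for illustration.

```python
import numpy as np

# Made-up threshold instance: a small region where eta = 0.5 and background noise
# eta = 0.05 elsewhere; population quantities only, no sampling.
n, z, alpha = 100, 10, 0.75
eta = np.where(np.arange(n) < z, 0.5, 0.05)
thresholds = list(range(5, 55, 5))                 # Pi = {[t]}

def fdr(t): return 1.0 - eta[:t].mean()
def tp(t):  return eta[:t].sum()

feasible = [t for t in thresholds if fdr(t) <= alpha]
t_star = max(feasible, key=tp)                     # pi*_alpha for this instance

for t in thresholds:
    gap_fdr = abs(fdr(t) - alpha)                                  # Delta_{pi,alpha}
    sym = abs(t - t_star)                                          # |pi symm.-diff. pi*_alpha|
    gap_tp = abs(tp(t) - tp(t_star)) / sym if sym else 0.0         # tilde-Delta_pi
    # Theorem 2 scalings with the V_pi and log factors dropped:
    s_fdr = 1.0 / (t * gap_fdr ** 2)
    s_tp = 1.0 / (sym * gap_tp ** 2) if sym and gap_tp else float("inf")
    tag = "  <- pi*_alpha" if t == t_star else ("" if fdr(t) <= alpha else "  (not FDR-controlled)")
    print(f"[t={t:2d}] FDR={fdr(t):.3f} TP={tp(t):.2f} s_fdr~{s_fdr:9.1f} s_tp~{s_tp:9.1f}{tag}")
```

Note the row for t = 25: its TP exceeds that of π∗_α, yet it is not FDR-controlled, which is exactly why elimination on TP (condition 2) must only ever be driven by certified sets. Sets with small gaps are the expensive entries, and the row for π∗_α itself (a zero TP gap) is the degenerate case that the other removal conditions handle.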
Sample Complexity of FDR-Control In any round where there exists a set π ∈ Ak \ Ck with arm i ∈ π, i.e. π is not yet FDR controlled, there is the potential for sampling i ∈ Sk. A set π only leaves Ak if i) it is shown to not be FDR controlled (condition 1 of the algorithm), ii) because an FDR controlled set eliminates it on the basis of TP (condition 2), or iii) it is contained in an FDR controlled set (condition 3). These three cases reflect the three arguments of the min in the defined quantity TFDRπ , respectively. Taking the maximum over all sets containing an arm i and summing over all i gives the total FDR-control term. This is a large savings relative to naive non-adaptive algorithms that sample until every set π in Π was FDR controlled which would take O(nmaxπ∈Π sFDRπ ) samples. Sample Complexity of TPR-Elimination An FDR-controlled set π ∈ Πα is only removed from Ck when eliminated by an FDR-controlled set with higher TP or if it is removed because it is contained in an FDR-controlled set. In general we can upper bound the former time by the samples needed for π∗α to eliminate π once we know π ∗ α is FDR controlled - this gives rise to maxπ∈Πα:i∈π∆π∗α T TP π . Note that sets are removed in a procedure mimicking active classification and so the active gains there apply to this setting as well. A naive passive algorithm that continues to sample until both the FDR of every set is determined, and π∗α has higher TP than every other FDR-controlled set gives a significantly worse sample complexity of O(nmax{maxπ∈Πα sFDRπ ,maxπ 6∈Πα sTPπ }). Comparison with [5]. Similar to our proposed algorithm, [5] samples in the union of all active sets and maintains statistics on the empirical FDR of each set, along the way removing sets that are not FDR-controlled or have lower TPR than an FDR-controlled set. However, they fail to sample in the symmetric difference, missing an important link between FDR-control and active classification. In particular, the confidence intervals they use are far looser as a result. They also only consider the case of persistent noise. Their proven sample complexity results are no better than those achieved by the passive algorithm that samples each item uniformly, which is precisely the sample complexity described at the end of the previous paragraph. One Dimensional Thresholds Consider a stylized modeling of the topology βαββ from the introduction in the persistent noise setting where Π = {[t] : t ≤ n}, ηi ∼ Ber(β1{i ≤ z}) with β < .5, and z ∈ [n] is assumed to be small, i.e., we assume that there is only a small region in which positive labels can be found and the Bayes classifier is just to predict 0 for all points. Assuming α > 1− β, one can show the sample complexity of Algorithm 2 satisfiesO((1−α)−2(log(n/(1−α))+(1+β)z/(1−α))) while any naive non-adaptive sampling strategy will take at least O(n) samples. Implementation. For simple classes Π such as thresholds or axis aligned rectangles, our algorithm can be made computationally efficient. But for more complex classes there may be a wide gap between theory and practice, just as in classification [36, 20]. However, the algorithm motivates two key ideas - sample in the union of potentially good sets to learn which are FDR controlled, and sample in the symmetric difference to eliminate sets. The latter insight was originally made by A2 in the case of classification and has justified heuristics such as uncertainty sampling [36]. 
Developing analogous heuristics for the former case of FDR-control is an exciting avenue of future work.
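Separately, the unbiasedness of the rejection-sampled difference estimator µ̂_{π′,k} − µ̂_{π,k} from Section 3 is easy to check by simulation. The sketch below is illustrative only: the means, the two sets, the reward noise, and the sample sizes are arbitrary choices, and agreement is up to Monte-Carlo error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte-Carlo check that the rejection-sampled estimator of mu_{pi'} - mu_pi is unbiased.
n, t_total, reps = 30, 200, 5000
mu = rng.uniform(-1.0, 1.0, size=n)                 # E[R_i] = mu_i
pi = frozenset(range(0, 12))
pi_prime = frozenset(range(6, 20))

true_diff = mu[list(pi_prime)].sum() - mu[list(pi)].sum()

estimates = []
for _ in range(reps):
    I = rng.integers(n, size=t_total)                               # I_t uniform on [n]
    R = mu[I] + rng.uniform(-0.2, 0.2, size=t_total)                # noisy rewards with mean mu_{I_t}
    sign = (np.isin(I, list(pi_prime - pi)).astype(float)
            - np.isin(I, list(pi - pi_prime)).astype(float))
    # sign is zero outside the symmetric difference pi ^ pi', so rewards that the
    # algorithm never observes contribute nothing to the estimate.
    estimates.append(n / t_total * float(np.sum(R * sign)))

print("true difference:", round(float(true_diff), 2),
      "| mean of estimator:", round(float(np.mean(estimates)), 2))
```

Because the indicator vanishes outside the symmetric difference, the rewards that the algorithm never observes contribute nothing to the sum, which is precisely why restricting observations to T_k loses nothing and costs far fewer labels.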
1. What is the main contribution of the paper regarding the problem setting and general approach? 2. How does the proposed algorithm differ from related work, particularly in sampling in the symmetric difference? 3. Do you have any questions or concerns regarding the paper's writing style and presentation? 4. What are the strengths and weaknesses of the paper's theoretical analysis and results? 5. Are there any typos or minor comments regarding style that could be improved in the paper?
Review
Review First I want to say that I like the problem setting and the general approach. Also I do not have a single main concern, but I have several non-negligible ones that, when piled up, determined my final score. In what follows, I’ll order my concerns/questions from more important ones to minor comments. I felt that the main result, which is given in Theorem 2, was really hard to interpret. There are way too many terms, making their combination in the final result very hard to interpret. I found the discussion following the theorem very helpful, but it helped me understand how these terms occurred in the proof, rather than what their practical meaning is. I am happy seeing such a theorem if it is followed by multiple corollaries, given by different parameter instantiations, showing when the bound is tight and how the obtained complexity compares to prior work. Some comparison with the result of [4] is given, although in the fully general setting; it would be interesting to see settings when this difference is significant, and when it is not. For example, I liked the “One Dimensional Thresholds” example. Perhaps this is a matter of taste, but when I reached the last paragraph, I thought the paper ended too early; I was expecting more such examples to follow. 
Another concern is that there have been multiple works on different bandit approaches to multiple testing (not all of which test for whether the mean of a distribution is zero, as stated in the paper). Some related papers that weren’t discussed include, for example, Optimal Testing in the Experiment-rich Regime by Schmit et al. (AISTATS 2019), A framework for Multi-A(rmed)/B(andit) testing with online FDR control by Yang et al. (NeurIPS 2017), etc. Moreover, these papers focus on sample complexity just like the current paper. Their setting is different, but the papers are similar enough that they deserve a brief discussion. Citing only 3 papers in the “Multiple Hypothesis Testing” paragraph makes me think that the connections to prior work in this area have not been fully explored. If I understand correctly, the proposed algorithm is inspired by algorithms in related work (e.g. [4, 10]), but a fundamentally different idea is sampling in the symmetric difference, which has not been exploited so far? Further, I found the writing very messy in some parts, and it took a lot of re-reading to go over some definitions and arguments. For example, A_k and T_k are introduced without stating what k is. At the end of page 4, there is a sentence that says “the number of samples that land in T_k”, even though T_k is not a set of samples. In the same paragraph, mu-hats are used without being previously defined. The paragraph assumes I_t is uniformly distributed on [n], even though that is not stated beforehand. In the definition of TPR, eta_[n] is used, even though it is never defined. I understand that submissions are thoroughly revised before publication, but these kinds of mistakes make reviewing harder than it needs to be. I don’t know if this is necessary for the proof or not, but I didn’t see why the mu-hat difference is unbiased, as stated at the bottom of page 4. Especially if the formal result hinges on this fact, I would appreciate a formal argument explaining why this is the case. The rejection sampling strategy is essentially equivalent to the following: conditioned on “past information”, sample uniformly from the set T_k. This couples the old and new samples, making it not so obvious to me that the mu-hat difference is unbiased. Related to this point, I didn’t understand the part that says that the number of samples that land in T_k follow a geometric distribution. I agree that the wait time until you observe a selection in T_k is a geometric random variable. Relatively minor comments regarding style: 1. It is incorrect to say that Y_{I_t,t} is iid, as written at the beginning of page 3; iid is an assumption that refers to a set of random variables. This sentence needs to be rephrased more formally. 2. I was confused by the sentence “Instead of considering all possible subsets of [n], we will restrict ourselves to a finite class … .” The set of all subsets of [n] is finite anyway. 3. There is a minus sign missing when introducing mu_i in the risk equation on page 4. Either way, I do not see the purpose of introducing the risk. The paper makes it clear that the goal is to maximize TPR given an FDR constraint, as opposed to minimizing risk. Much of Section 1.1. seems like an unnecessary distraction. 4. There are some typos that need revision, like at the beginning of Section 3.1 where it says “specific specific noise models”, or in the Remark on page 5 there should be R_{i,t}. 5. 
The Benjamini-Hochberg paper should probably be cited, given that the used error metric is FDR, which stems from that paper. After rebuttal: The authors have clarified some of my concerns, and have promised to improve clarity of presentation. The other reviews have also convinced me about the originality of this submission, so I am increasing my score.
NIPS
Title CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces Abstract In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of resultant capsules are used to score the probability of belonging to different classes. We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1D lines. Only a small negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of the state-of-the-art ResNet backbones by 10− 20% and that of the Densenet by 5− 7% respectively at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets. 1 Introduction Since the idea of capsule net [15, 9] was proposed, many efforts [8, 17, 14, 1] have been made to seek better capsule architectures as the next generation of deep network structures. Among them are the dynamic routing [15] that can dynamically connect the neurons between two consecutive layers based on their output capsule vectors. While these efforts have greatly revolutionized the idea of building a new generation of deep networks, there are still a large room to improve the state of the art for capsule nets. In this paper, we do not intend to introduce some brand new architectures for capsule nets. Instead, we focus on formalizing the principled idea of using the overall length of a capsule rather than ∗Corresponding author: G.-J. Qi, email: [email protected] and [email protected]. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. a single neuron activation to model the presence of an entity [15, 9]. Unlike the existing idea in literature [15, 9], we formulate this idea by learning a group of capsule subspaces to represent a set of entity classes. Once capsule subspaces are learned, we can obtain set of capsules by performing an orthogonal projection of feature vectors onto these capsule subspaces. Then, one can adopt the principle of separating the presence of an entity and its instantiation parameters into capsule length and orientation, respectively. In particular, we use the lengths of capsules to score the presence of entity classes corresponding to different subspaces, while their orientations are used to instantiate the parameters of entity properties such as poses, scales, deformations and textures. 
In this way, one can use the capsule length to achieve the intra-class invariance in detecting the presence of an entity against appearance variations, as well as model the equivalence of the instantiation parameters of entities by encoding them into capsule orientations [15]. Formally, each capsule subspace is spanned by a basis from the columns of a weight matrix in the neural network. A capsule projection is performed by projecting input feature vectors fed from a backbone network onto the capsule subspace. Specifically, an input feature vector is orthogonally decomposed into the capsule component as the projection onto a capsule subspace and the complement component perpendicular to the subspace. By analyzing the gradient through the capsule projection, one can show that a capsule subspace is iteratively updated along the complement component that contains the novel characteristics of the input feature vector. The training process will continue until all presented feature vectors of an associated class are well contained by the corresponding capsule subspace, or simply the back-propagated error accounting for misclassification caused by capsule lengths vanishes. We call the proposed deep network with the capsule projections the CapProNet for brevity. The CapProNet is friendly to any existing network architecture – it is built upon the embedded features generated by some neural networks and outputs the projected capsule vectors in the subspaces according to different classes. This makes it amenable to be used together with existing network architectures. We will conduct experiments on image datasets to demonstrate the CapProNet can greatly improve the state-of-the-art results by sophisticated networks with only small negligible computing overhead. 1.1 Our Findings Briefly, we summarize our main findings from experiments upfront about the proposed CapProNet. • The proposed CapProNet significantly advances the capsule net performance [15] by reducing its test error from 10.3% and 4.3% on CIFAR10 and SVHN to 3.64% and 1.54% respectively upon the chosen backbones. • The proposed CapProNet can also greatly reduce the error rate of various backbone networks by adding capsule projection layers into these networks. For example, The error rate can be reduced by more than 10− 20% based on Resnet backbone, and by more than 5− 6% based on densenet backbone, with only < 1% and 0.04% computing and memory overhead in training the model compared with the backbones. • The orthogonal projection onto capsule subspaces plays a critical role in delivering competitive performance. On the contrary, simply grouping neurons into capsules could not obviously improve the performance. This shows the capsule projection plays an indispensable role in the CapProNet delivering competitive results. • Our insight into the gradient of capsule projection in Section 2.3 explains the advantage of updating capsule subspaces to continuously contain novel components of training examples until they are correctly classified. We also find that the capsule projection can be viewed as a high-dimensional extension of weight normalization in Section 2.4, where the conventional weight normalization is merely a simple case of the capsule projection onto 1D lines. The source code is available at https://github.com/maple-research-lab. The remainder of this paper is organized as follows. We present the idea of the Capsule Projection Net (CapProNet) in Section 2, and discuss the implementation details in Section 3. 
The review of related work follows in Section 4, and the experiment results are demonstrated in Section 5. Finally, we conclude the paper and discuss the future work in Section 6. 2 The Capsule Projection Nets In this section, we begin by shortly revisiting the idea of conventional neural networks in classification tasks. Then we formally present the orthogonal projection of input feature vectors onto multiple capsule subspaces where capsule lengths are separated from their orientations to score the presence of entities belonging to different classes. Finally, we analyze the gradient of the resultant capsule projection by showing how capsule subspaces are updated iteratively to adopt novel characteristics of input feature vectors through back-propagation. 2.1 Revisit: Conventional Neural Networks Consider a feature vector x ∈ Rd generated by a deep network to represent an input entity. Given its ground truth label y ∈ {1, 2, · · · , L}, the output layer of the deep network aims to learn a group of weight vectors {w1,w2, · · · ,wL} such that wTy x > w T l x, for all, l 6= y. (1) This hard constraint is usually relaxed to a differentiable softmax objective, and the backpropagation algorithm is performed to train {w1,w2, · · · ,wL} and the backbone network generating the input feature vector x. 2.2 Capsule Projection onto Subspaces Unlike simply grouping neurons to form capsules for classification, we propose to learn a group of capsule subspaces {S1,S2, · · · ,SL}, each associated with one of L classes. Suppose we have a feature vector x ∈ Rd generated by a backbone network from an input sample. Then, to learn a proper feature representation, we project x onto these capsule subspaces, yielding L capsules {v1,v2, · · · ,vL} as projections. Then, we will use the lengths of these capsules to score the probability of the input sample belonging to different classes by assigning it to the one according to the longest capsule. Formally, for each capsule subspace Sl of dimension c, we learn a weight matrix Wl ∈ Rd×c the columns of which form the basis of the subspace, i.e., Sl = span(Wl) is spanned by the column vectors. Then the orthogonal projection vl of a vector x onto Sl is found by solving vl = argminv∈span(Wl) ‖x − v‖2. This orthogonal projection problem has the following closedform solution vl = Plx, and Pl = WlW + l where Pl is called projection matrix 2 for capsule subspace Sl, and W+l is the Moore-Penrose pseudoinverse [4]. When the columns of Wl are independent, W+l becomes (W T l Wl) −1WTl . In this case, since we only need the capsule length ‖vl‖2 to predict the class of an entity, we have ‖vl‖2 = √ vTl vl = √ xTPTl Plx = √ xTWlΣlWTl x (2) where Σl = (WTl Wl) −1 can be seen as a normalization matrix applied to the transformed feature vector WTl x as a way to normalize the Wl-transformation based on the capsule projection. As we will discuss in the next subsection, this normalization plays a critical role in updating Wl along the orthogonal direction of the subspace so that novel components pertaining to the properties of input entities can be gradually updated to the subspace. In practice, since c d, the c columns of Wl are usually independent in a high-dimensional d-D space. Otherwise, one can always add a small I to WTl Wl to avoid the numeric singularity when taking the matrix inverse. Later on, we will discuss a fast iterative algorithm to compute the matrix inverse with a hyper-power sequence that can be seamlessly integrated with the back-propagation iterations. 
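The closed-form projection above is straightforward to verify numerically. The snippet below is a minimal NumPy sketch with random data, not the authors' released implementation: it forms P_l = W_l (W_l^T W_l)^{-1} W_l^T for each class, checks that P_l is a symmetric idempotent matrix, and confirms that the capsule length equals the closed form sqrt(x^T W_l Σ_l W_l^T x); the dimensions d, c, L are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, L = 64, 4, 10                        # feature dim, capsule dim, number of classes (arbitrary)

x = rng.standard_normal(d)                 # stand-in for a backbone feature vector
Ws = [rng.standard_normal((d, c)) for _ in range(L)]   # basis matrix W_l per class subspace

lengths = []
for W in Ws:
    Sigma = np.linalg.inv(W.T @ W)         # normalization matrix (W^T W)^{-1}
    P = W @ Sigma @ W.T                    # orthogonal projection onto span(W)
    v = P @ x                              # capsule vector v_l
    # The capsule length equals the closed form sqrt(x^T W Sigma W^T x).
    closed_form = np.sqrt(x @ W @ Sigma @ W.T @ x)
    assert np.allclose(np.linalg.norm(v), closed_form)
    assert np.allclose(P, P.T) and np.allclose(P @ P, P)   # symmetric and idempotent
    lengths.append(float(closed_form))

pred = int(np.argmax(lengths))             # classify by the longest capsule
print("capsule lengths:", np.round(lengths, 3), "-> predicted class", pred)
```

In a real network the same computation would be applied batch-wise to the backbone features, with each W_l trained end-to-end.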
2A projection matrix P for a subspace S is a symmetric idempotent matrix (i.e., PT = P and P2 = P) such that its range space is S. 2.3 Insight into Gradients In this section, we take a look at the gradient used to update Wl in each iteration, which can give us some insight into how the CapProNet works in learning the capsule subspaces. Suppose we minimize a loss function ` to train the capsule projection and the network. For simplicity, we only consider a single sample x and its capsule vl. Then by the chain rule and the differential of inverse matrix [13], we have the following gradient of ` wrt Wl ∂` ∂Wl = ∂` ∂‖vl‖2 ∂‖vl‖ ∂Wl = ∂` ∂‖vl‖2 (I−Pl)xxTW+Tl ‖vl‖2 (3) where the operator (I−Pl) can be viewed as the projection onto the orthogonal complement of the capsule subspace spanned by the columns of Wl, W+Tl denotes the transpose of W + l , and the factor ∂` ∂‖vl‖2 is the back-propagated error accounting for misclassification caused by ‖vl‖2. Denote by x⊥ , (I−Pl)x the projection of x onto the orthogonal component perpendicular to the current capsule subspace Sl. Then, the above gradient ∂`∂Wl only contains the columns parallel to x ⊥ (up to coefficients in the vector xTW+Tl ). This shows that the basis of the current capsule subspace Sl in the columns of Wl is updated along this orthogonal component of the input x to the subspace. One can regard x⊥ as representing the novel component of x not yet contained in the current Sl, it shows capsule subspaces are updated to contain the novel component of each input feature vector until all training feature vectors are well contained in these subspaces, or the back-propagated errors vanish that account for misclassification caused by ‖vl‖2. Figure 1 illustrates an example of 2-D capsule subspace S spanned by two basis vectors w1 and w2. An input feature vector x is decomposed into the capsule projection v onto S and an orthogonal complement x⊥ perpendicular to the subspace. In one training iteration, two basis vectors w1 and w2 are updated to w′1 and w ′ 2 along the orthogonal direction x⊥, where x⊥ is viewed as containing novel characteristics of an entity not yet contained by S. 2.4 A Perspective of Multiple-Dimensional Weight Normalization As discussed in the last subsection and Figure 2, we can explain the orthogonal components represent the novel information in input data, and the orthogonal decomposition thus enables us to update capsule subspaces by more effectively incorporating novel characteristics/components than the classic capsule nets. One can also view the capsule projection as normalizing the column basis of weight matrix Wl simultaneously in a high-dimensional capsule space. If the capsule dimension c is set to 1, it is not hard to see that Eq. (2) can be rewritten by setting vl to |WTl x| ‖Wl‖ . It produces the conventional weight normalization of the vector Wl ∈ R d, as a special 1D case of the capsule projection. As the capsule dimension c grows, Wl can be normalized by replacing vl with Σ 1/2 l W T l x, which keeps ‖vl‖ unchanged in Eq. (2). This enables us to extend the conventional weight normalization to high dimensional capsule subspaces. 3 Implementation Details We will discuss some implementation details in this section, including 1) the computing cost to perform capsule projection and a fast iterative method by using hyper-power sequences without restart; 2) the objective to train the capsule projection. 
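As an aside to the Section 2.4 observation above that the capsule projection with c = 1 reduces to conventional weight normalization, the check below confirms the claim numerically; the data and dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32
x = rng.standard_normal(d)
w = rng.standard_normal((d, 1))            # a 1-D "capsule subspace" (c = 1)

Sigma = np.linalg.inv(w.T @ w)             # here simply 1 / ||w||^2
capsule_len = float(np.sqrt(x @ w @ Sigma @ w.T @ x))          # general projection formula
weight_norm = float(np.abs(w.T @ x) / np.linalg.norm(w))       # |w^T x| / ||w||

print(capsule_len, weight_norm)            # identical up to floating point
assert np.isclose(capsule_len, weight_norm)
```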
3 Implementation Details

In this section we discuss some implementation details, including 1) the computing cost of the capsule projection and a fast iterative method based on hyper-power sequences without restart, and 2) the objective used to train the capsule projection.

3.1 Computing the Normalization Matrix

Taking a matrix inverse to obtain the normalization matrix $\boldsymbol{\Sigma}_l$ becomes more expensive as the dimension $c$ increases. However, once the model is trained, $\boldsymbol{\Sigma}_l$ is fixed for inference and only needs to be computed once. Fortunately, the dimension $c$ of a capsule subspace is usually much smaller than the feature dimension $d$, which is typically in the hundreds or even thousands; for example, $c$ is typically no larger than $8$ in our experiments. Thus, taking a matrix inverse to compute these normalization matrices only incurs a small, negligible computing overhead compared with training the many other layers in a deep network.

Alternatively, one can use an iterative algorithm to compute the normalization matrix. We consider the hyper-power sequence
$$\boldsymbol{\Sigma}_l \leftarrow 2\boldsymbol{\Sigma}_l - \boldsymbol{\Sigma}_l \mathbf{W}_l^T \mathbf{W}_l \boldsymbol{\Sigma}_l,$$
which is proven to converge to $(\mathbf{W}_l^T \mathbf{W}_l)^{-1}$ from a proper initial point [2, 3]. In stochastic gradient methods, only a small change is made to $\mathbf{W}_l$ in each training iteration, so it is often sufficient to use this recursion to make a one-step update of the normalization matrix from the previous iteration. The normalization matrix $\boldsymbol{\Sigma}_l$ can be initialized to $(\mathbf{W}_l^T \mathbf{W}_l)^{-1}$ at the very first iteration to give an ideal start. This further saves computing cost in training the network.

In experiments, the capsule projection incurred a very small computing overhead. For example, training ResNet110 on CIFAR10/100 cost about 0.16 seconds per iteration on a batch of 128 images. In comparison, training the CapProNet with a ResNet110 backbone in an end-to-end fashion only cost an additional < 0.001 seconds per iteration, i.e., less than 1% computing overhead for the CapProNet compared with its backbone. For inference, we did not find any noticeable computing overhead for the CapProNet compared with its backbone network.

3.2 Training Capsule Projections

Given a group of capsule vectors $\{\mathbf{v}_1, \mathbf{v}_2, \cdots, \mathbf{v}_L\}$ corresponding to a feature vector $\mathbf{x}$ and its ground-truth label $y$, we train the model by requiring $\|\mathbf{v}_y\|_2 > \|\mathbf{v}_l\|_2$ for all $l \neq y$. In other words, we require $\|\mathbf{v}_y\|_2$ to be larger than the lengths of all the other capsules. Consequently, we can minimize the following negative logarithmic softmax function
$$\ell(\mathbf{x}, y) = -\log \frac{\exp(\|\mathbf{v}_y\|_2)}{\sum_{l=1}^{L} \exp(\|\mathbf{v}_l\|_2)}$$
to train the capsule subspaces and the network generating $\mathbf{x}$ through back-propagation in an end-to-end fashion. Once the model is trained, we classify a test sample into the class with the longest capsule.
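As a sanity check of the recursion above, the short NumPy sketch below (our own illustrative code, not the authors') iterates the hyper-power update on a random basis matrix and watches it converge to the exact inverse; during training one would typically run a single step per iteration, warm-started from the previous $\boldsymbol{\Sigma}_l$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 128, 8
W = rng.standard_normal((d, c))
G = W.T @ W                                    # c x c Gram matrix; Sigma should approach G^{-1}

# A scaled identity is a simple initial point from which the recursion converges.
Sigma = np.eye(c) / np.linalg.norm(G)
for step in range(30):
    Sigma = 2 * Sigma - Sigma @ G @ Sigma      # hyper-power update Sigma <- 2 Sigma - Sigma G Sigma
    err = np.linalg.norm(Sigma @ G - np.eye(c))

print(err, np.allclose(Sigma, np.linalg.inv(G)))   # err shrinks toward 0; the check prints True
```

Each step only involves small $c \times c$ products, which is why the one-step warm-started variant adds almost no overhead to training.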
4 Related Work

The presented CapProNets are inspired by CapsuleNets, adopting the idea of using a capsule vector rather than a neuron activation output to predict the presence of an entity and its properties [15, 9]. In particular, the overall length of a capsule vector is used to represent the existence of the entity, and its direction instantiates the properties of the entity. We formalize this idea in this paper by explicitly learning a group of capsule subspaces and projecting embedded features onto these subspaces. The advantage of these capsule subspaces is that their directions can represent characteristics of an entity, containing much richer information (such as positions, orientations, scales and textures) than a single activation output. By performing an orthogonal projection of an input feature vector onto a capsule subspace, one can find the best direction revealing these properties. Otherwise, the entity is thought of as absent, as the projection vanishes when the input feature vector is nearly perpendicular to the capsule subspace.

5 Experiments

We conduct experiments on benchmark datasets to evaluate the proposed CapProNet against other deep network models.

5.1 Datasets

We use the CIFAR, SVHN and ImageNet datasets to evaluate performance.

CIFAR The CIFAR dataset contains 50,000 training and 10,000 test images of 32 × 32 pixels. Standard data augmentation with horizontal flipping and shifting is adopted. The images are labeled with 10 and 100 categories, namely the CIFAR10 and CIFAR100 datasets. A separate validation set of 5,000 images is split from the training set to choose the model hyperparameters, and the final test errors are reported with the chosen hyperparameters after training the model on all 50,000 training images.

SVHN The Street View House Numbers (SVHN) dataset has 73,257 training and 26,032 test images of colored digits, with an additional 531,131 training images available. Following the widely used evaluation protocol in the literature [5, 11, 12, 16], all training examples are used without data augmentation, while a separate validation set of 6,000 images is split from the training set. The model with the smallest validation error is selected and its error rate is reported.

ImageNet The ImageNet dataset consists of 1.2 million training and 50k validation images. We apply mean-image subtraction as the only pre-processing step and use random cropping, scaling and horizontal flipping for data augmentation [6]. The final resolution of both the train and validation sets is 224 × 224, and 20k images are chosen randomly from the training set for tuning hyperparameters.

5.2 Backbone Networks

We test various networks such as ResNet [6], ResNet (pre-activation) [7], WideResNet [18] and DenseNet [10] as backbones in the experiments. The last output layer of a backbone network is replaced by the capsule projection, where the feature vector from the second-to-last layer of the backbone is projected onto multiple capsule subspaces. The CapProNet is trained from scratch in an end-to-end fashion on the given training set. For the sake of fair comparison, the strategies used to train the respective backbones [6, 7, 18], such as the learning rate schedule, parameter initialization, and the stochastic optimization solver, are adopted to train the CapProNet. We denote the CapProNet with a backbone X by CapProNet+X below.
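The sketch below illustrates what replacing a backbone's final fully connected layer with a capsule projection head could look like in PyTorch. The CapsuleProjection module, its random initialization, the eps regularizer and the ResNet-18 backbone are our own hypothetical choices for illustration; the authors' released implementation may differ.

```python
import torch
import torch.nn as nn
import torchvision

class CapsuleProjection(nn.Module):
    """Scores each class by the capsule length ||v_l||_2 of Eq. (2)."""
    def __init__(self, d, num_classes, c=4, eps=1e-6):
        super().__init__()
        # One d x c basis matrix W_l per class; its columns span capsule subspace S_l.
        self.W = nn.Parameter(0.01 * torch.randn(num_classes, d, c))
        self.eps = eps

    def forward(self, x):                                   # x: (batch, d)
        WtW = self.W.transpose(1, 2) @ self.W               # (L, c, c) Gram matrices
        eye = self.eps * torch.eye(WtW.size(-1), device=x.device)
        Sigma = torch.linalg.inv(WtW + eye)                 # normalization matrices Sigma_l
        z = torch.einsum('bd,ldc->blc', x, self.W)          # z_l = W_l^T x
        y = torch.einsum('blc,lce->ble', z, Sigma)          # Sigma_l z_l
        return torch.sqrt((y * z).sum(-1) + self.eps)       # (batch, L) capsule lengths

# Hypothetical usage: swap the final layer of a torchvision ResNet-18.
model = torchvision.models.resnet18(num_classes=10)
model.fc = CapsuleProjection(d=model.fc.in_features, num_classes=10, c=4)
lengths = model(torch.randn(2, 3, 224, 224))                # capsule lengths act as class scores
loss = nn.functional.cross_entropy(lengths, torch.tensor([3, 7]))  # softmax over lengths, Sec. 3.2
```

Training with cross-entropy over the capsule lengths corresponds to the negative logarithmic softmax objective of Section 3.2, and prediction picks the class with the longest capsule.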
5.3 Results

We perform experiments with various networks as backbones for comparison with the proposed CapProNet. In particular, we consider three variants of ResNets: the classic one reported in [11] with 110 layers, the ResNet with pre-activation [7] with 164 layers, and two WideResNet configurations [18] with 16 and 28 layers, as well as DenseNet-BC [10] with 100 layers. Compared with ResNet and ResNet with pre-activation, WideResNet has fewer but wider layers and reaches smaller error rates, as shown in Table 1. We test CapProNet+X with these different backbone networks to evaluate whether it can consistently improve these state-of-the-art backbones. It is clear from Table 1 that CapProNet+X outperforms the corresponding backbone networks by a remarkable margin. For example, CapProNet+ResNet reduces the error rate by 19%, 17.5% and 10% on CIFAR10, CIFAR100 and SVHN, while CapProNet+DenseNet reduces the error rate by 5.8%, 4.8% and 6.8%, respectively.

Finally, we note that the CapProNet significantly advances the capsule net performance [15] by reducing its test error from 10.3% and 4.3% on CIFAR10 and SVHN to 3.64% and 1.54%, respectively, based on the chosen backbones.

We also evaluate the CapProNet with ResNet50 and ResNet101 backbones for single-crop Top-1/Top-5 results on the ImageNet validation set. To ensure a fair comparison, we retrain the backbone networks based on the official ResNet model (https://github.com/tensorflow/models/tree/master/official/resnet), where both the original ResNet [6] and the CapProNet are trained with the same training strategies on four GPUs. The results are reported in Table 2, where CapProNet+X successfully outperforms the original backbones on both Top-1 and Top-5 error rates. It is worth noting that these gains are obtained with only the last layer of the backbones replaced by the capsule projection layer. We believe the error rate can be further reduced by replacing the intermediate convolutional layers with capsule projections, and we leave this to future research.

We also note that CapProNet+X consistently outperforms the backbone counterparts with varying dimensions $c$ of the capsule subspaces. In particular, with the WideResNet backbones, in most cases the error rates are reduced with an increasing capsule dimension $c$ on all datasets, where the smallest error rates often occur at $c = 8$. In contrast, while CapProNet+X still clearly outperforms both the ResNet and ResNet (pre-activation) backbones, its error rates are roughly at the same level across $c$. This is probably because both ResNet backbones have a much smaller input dimension $d = 64$ of feature vectors into the capsule projection than the WideResNet backbones, where $d = 128$ and $d = 160$ with 16 and 28 layers, respectively. This suggests that a larger input dimension makes it possible to use capsule subspaces of higher dimensions to encode patterns of variation along more directions in a higher-dimensional input feature space.

To further assess the effect of the capsule projection, we compare with a method that simply groups the output neurons into capsules without performing the orthogonal projection onto capsule subspaces. We still use the lengths of these resultant "capsules" of grouped neurons to classify input images, and the model is trained in an end-to-end fashion accordingly. Unfortunately, this approach, namely GroupNeuron+ResNet in Table 3, does not show significant improvement over the backbone network. For example, the smallest error rate by GroupNeuron+ResNet is 6.26 at $c = 2$, a small improvement over the error rate of 6.41 reached by ResNet110. This demonstrates that the capsule projection makes an indispensable contribution to improving model performance.

When training on CIFAR10/100 and SVHN, one iteration typically costs about 0.16 seconds for ResNet110, with an additional less than 0.01 seconds to train the corresponding CapProNet, i.e., less than 1% computing overhead. The memory overhead for the model parameters is even smaller. For example, the CapProNet+ResNet only has 640–6400 additional parameters at $c = 2$, compared with the 1.7M parameters of the backbone ResNet. We do not notice any large computing or memory overheads with the ResNet (pre-activation) or WideResNet backbones either. This shows the advantage of CapProNet+X: its error rate reduction is not achieved by consuming much more computing and memory resources.
5.4 Visualization of Projections onto Capsule Subspaces

Table 3: Comparison between GroupNeuron and CapProNet with the ResNet110 backbone on the CIFAR10 dataset. The best result for each of c = 2, 4, 8 capsules is highlighted in bold. It shows the need for the capsule projection to obtain better results.

c   GroupNeuron   CapProNet
2   6.26          5.24
4   6.29          5.27
8   6.42          5.19

To give an intuitive insight into the learned capsule subspaces, we plot the projections of input feature vectors onto the capsule subspaces. Instead of directly using $\mathbf{P}_l\mathbf{x}$ to project feature vectors onto capsule subspaces in the original input space $\mathbb{R}^d$, we use $(\mathbf{W}_l^T\mathbf{W}_l)^{-\frac{1}{2}}\mathbf{W}_l^T\mathbf{x}$ to project an input feature vector $\mathbf{x}$ onto $\mathbb{R}^c$, since this projection preserves the capsule length $\|\mathbf{v}_l\|_2$ defined in Eq. (2).

Figure 2 illustrates the 2-D capsule subspaces learned on CIFAR10 when $c = 2$ and $d = 64$ in CapProNet+ResNet110, where each subspace corresponds to one of the ten classes. Red points represent the capsules projected from the class of input samples corresponding to the subspace, while green points correspond to one of the other classes. The figure shows that red capsules have larger lengths than green ones, which suggests the capsule length is a valid metric for classifying samples into their corresponding classes. Meanwhile, the orientation of a capsule reflects various instantiations of a sample in these subspaces. These figures visualize the separation of the lengths of capsules from their orientations in classification tasks.

6 Conclusions and Future Work

In this paper, we present a novel capsule projection network that learns a group of capsule subspaces for different classes. Specifically, the parameters of an orthogonal projection are learned for each class, and the lengths of the projected capsules are used to predict the entity class for a given input feature vector. Training continues until the capsule subspaces contain the input feature vectors of the corresponding classes or the back-propagated error vanishes. Experiment results on real image datasets show that the proposed CapProNet+X can greatly improve the performance of backbone networks without incurring large computing and memory overheads. While we only test the capsule projection as the output layer in this paper, we will attempt to insert it into intermediate layers of backbone networks as well, and hope this could give rise to a new generation of capsule networks with more discriminative architectures in the future.

Acknowledgements

L. Zhang and M. Edraki made equal contributions to implementing the idea: L. Zhang conducted experiments on CIFAR10 and SVHN datasets and visualized projections in capsule subspaces on CIFAR10. M. Edraki performed experiments on CIFAR100. G.-J. Qi initialized and formulated the idea, and prepared the paper.
1. What is the main contribution of the paper, and what are the strengths and weaknesses of the proposed approach?
2. What is the motivation behind using a capsule projection layer, and how does it differ from traditional classification or manifold learning techniques?
3. How does the capsule projection layer work, and what is the purpose of projecting the input feature vector onto a learnt capsule subspace?
4. Can you provide more information about the component perpendicular to the subspace and its role in detecting novel characteristics?
5. How does the proposed technique compare with standard dimensionality reduction techniques or a simple dimension-reducing matrix transformation?
6. What is the significance of choosing a smaller subspace dimension c, and could a larger c potentially lead to better separability?
7. Would it be possible to include additional baselines such as DenseNet and capsule networks in future experiments?
8. Does the capsule projection layer work equally well with different types of backbone networks, such as InceptionNet or VGGNet/AlexNet?
Review
This paper introduces an alternative to CNN-based architectures, inspired by the recently proposed capsule networks. The authors propose to replace the last layer of ResNet variants with a capsule projection network, thereby obtaining promising results on the CIFAR and SVHN datasets. However, the motivation for using a capsule projection layer is unclear, even though the technique is straightforward and easy to implement with minor computational overhead. The main idea of the capsule projection layer is to project the input feature vector onto learnt capsule subspaces (one for each class in the classification setting), which are then used to distinguish between the different classes. The authors also show that this projection technique leads to gradients that are orthogonal to the learnt subspace, enabling discovery of novel characteristics and improvement of the learnt subspace. They also show interesting visualizations indicating separability of the samples for every class. The quantitative results in this paper are encouraging when compared with the baselines used.

Strengths:
1. Straightforward idea which is easy to implement with minimal computational overhead.
2. Promising experimental results with interesting visualizations.

Weaknesses:
1. The motivation or the need for this technique is unclear. It would have been great to have some intuition why replacing the last layer of ResNets with a capsule projection layer is necessary and why it should work.
2. The paper is not very well written, possibly hurriedly written, so it is not easy to read. A lot is left desired in presentation and formatting, especially in figures/tables.
3. Even though the technique is novel, the contributions of this paper are not very significant. Also, there is not much attempt at contrasting this technique with the traditional classification or manifold learning literature.
4. There are a lot of missing entries in the experimental results tables, and it is not clear why.

Questions for authors:
Why does the input feature vector from the backbone network need to be decomposed into the capsule subspace component and its component perpendicular to the subspace? What shortcomings in the current techniques lead to such a design? What purpose does the component perpendicular to the subspace serve? The authors state that this component appears in the gradient and helps in detecting novel characteristics. However, the gradient (Eq. 3) does not only contain the perpendicular component but also another term x^T W_l^{+T}; is this transformation not similar to P_l x (the projection onto the subspace)? How should this term in the gradient be interpreted? Moreover, should we interpret the projection onto the subspace as a dimensionality reduction technique? If so, how does it compare with standard dimensionality reduction techniques or a simple dimension-reducing matrix transformation? What does "grouping neurons to form capsules" mean; any reference or explanation would be useful. Any insight into why the orthogonal projection is needed would be helpful. Is there any reason why the subspace dimension c was chosen to be in smaller ranges, apart from the computational aspect/independence assumption? Is it possible that a larger c can lead to better separability? Regarding experiments, it would be good to have baselines like DenseNet and capsule networks (Dynamic Routing Between Capsules, Sabour et al., NIPS 2017; they have also tried it out on CIFAR10). Moreover, it will be interesting to see whether the capsule projection layer works well only when the backbone network is a ResNet-type network, or whether it also helps when the backbone is InceptionNet or VGGNet/AlexNet.
NIPS
Title
CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces

Abstract
In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of the resultant capsules are used to score the probability of belonging to different classes. We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains the input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1-D lines. Only a small, negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of state-of-the-art ResNet backbones by 10–20% and that of the DenseNet by 5–7%, respectively, at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets.

1 Introduction

Since the idea of the capsule net [15, 9] was proposed, many efforts [8, 17, 14, 1] have been made to seek better capsule architectures as the next generation of deep network structures. Among them is the dynamic routing [15] that can dynamically connect the neurons between two consecutive layers based on their output capsule vectors. While these efforts have greatly revolutionized the idea of building a new generation of deep networks, there is still large room to improve the state of the art for capsule nets.

In this paper, we do not intend to introduce some brand new architecture for capsule nets. Instead, we focus on formalizing the principled idea of using the overall length of a capsule rather than a single neuron activation to model the presence of an entity [15, 9]. Unlike the existing idea in the literature [15, 9], we formulate this idea by learning a group of capsule subspaces to represent a set of entity classes. Once the capsule subspaces are learned, we can obtain a set of capsules by performing an orthogonal projection of feature vectors onto these capsule subspaces. Then, one can adopt the principle of separating the presence of an entity and its instantiation parameters into capsule length and orientation, respectively. In particular, we use the lengths of capsules to score the presence of entity classes corresponding to different subspaces, while their orientations are used to instantiate the parameters of entity properties such as poses, scales, deformations and textures.

∗Corresponding author: G.-J. Qi, email: [email protected] and [email protected].
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
In this way, one can use the capsule length to achieve intra-class invariance in detecting the presence of an entity against appearance variations, as well as model the equivalence of the instantiation parameters of entities by encoding them into capsule orientations [15].

Formally, each capsule subspace is spanned by a basis from the columns of a weight matrix in the neural network. A capsule projection is performed by projecting input feature vectors fed from a backbone network onto the capsule subspace. Specifically, an input feature vector is orthogonally decomposed into the capsule component, as the projection onto a capsule subspace, and the complement component perpendicular to the subspace. By analyzing the gradient through the capsule projection, one can show that a capsule subspace is iteratively updated along the complement component that contains the novel characteristics of the input feature vector. The training process continues until all presented feature vectors of an associated class are well contained by the corresponding capsule subspace, or simply until the back-propagated error accounting for misclassification caused by capsule lengths vanishes.

We call the proposed deep network with capsule projections the CapProNet for brevity. The CapProNet is friendly to any existing network architecture: it is built upon the embedded features generated by a neural network and outputs the projected capsule vectors in the subspaces according to different classes. This makes it amenable to use together with existing network architectures. We will conduct experiments on image datasets to demonstrate that the CapProNet can greatly improve state-of-the-art results of sophisticated networks with only a small, negligible computing overhead.

1.1 Our Findings

Briefly, we summarize upfront our main experimental findings about the proposed CapProNet.

• The proposed CapProNet significantly advances the capsule net performance [15] by reducing its test error from 10.3% and 4.3% on CIFAR10 and SVHN to 3.64% and 1.54%, respectively, upon the chosen backbones.

• The proposed CapProNet can also greatly reduce the error rate of various backbone networks by adding capsule projection layers into these networks. For example, the error rate can be reduced by more than 10–20% based on the ResNet backbone, and by more than 5–6% based on the DenseNet backbone, with only < 1% computing and 0.04% memory overhead in training the model compared with the backbones.

• The orthogonal projection onto capsule subspaces plays a critical role in delivering competitive performance. On the contrary, simply grouping neurons into capsules does not obviously improve the performance. This shows the capsule projection plays an indispensable role in the CapProNet delivering competitive results.

• Our insight into the gradient of the capsule projection in Section 2.3 explains the advantage of updating capsule subspaces to continuously contain novel components of training examples until they are correctly classified. We also find that the capsule projection can be viewed as a high-dimensional extension of weight normalization in Section 2.4, where the conventional weight normalization is merely a simple case of the capsule projection onto 1-D lines.

The source code is available at https://github.com/maple-research-lab.

The remainder of this paper is organized as follows. We present the idea of the Capsule Projection Net (CapProNet) in Section 2, and discuss the implementation details in Section 3. The review of related work follows in Section 4, and the experiment results are presented in Section 5. Finally, we conclude the paper and discuss future work in Section 6.
1. What is the novel approach proposed by CapProNet in representing class labels?
2. How does the proposed method differ from traditional neural networks?
3. What are the advantages of the simplified implementation of CapProNet compared to capsule networks?
4. Can the improved performance of CapProNet be attributed to the parametrized subspace projection?
5. How does the computational cost of CapProNet compare to other models on larger datasets like ImageNet?
Review
This paper proposes CapProNet, which uses a (capsule) vector rather than a neuron to represent class labels. The feature vector of an input sample is projected onto a parametrized subspace. The idea is inspired by the capsule network, but the implementation is much simpler and the computational cost is very small. Experiments on CIFAR10, CIFAR100 and SVHN with different model architectures show that the proposed model consistently improves performance. In general, the proposed model is simple and effective, and the experiments are thorough. The authors claim the computational overhead of the proposed model is small, but why do you experiment with your model only on small-scale datasets? It would be very interesting to see how the model performs on a large-scale dataset (i.e., ImageNet).
NIPS
Title CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces Abstract In this paper, we formalize the idea behind capsule nets of using a capsule vector rather than a neuron activation to predict the label of samples. To this end, we propose to learn a group of capsule subspaces onto which an input feature vector is projected. Then the lengths of resultant capsules are used to score the probability of belonging to different classes. We train such a Capsule Projection Network (CapProNet) by learning an orthogonal projection matrix for each capsule subspace, and show that each capsule subspace is updated until it contains input feature vectors corresponding to the associated class. We will also show that the capsule projection can be viewed as normalizing the multiple columns of the weight matrix simultaneously to form an orthogonal basis, which makes it more effective in incorporating novel components of input features to update capsule representations. In other words, the capsule projection can be viewed as a multi-dimensional weight normalization in capsule subspaces, where the conventional weight normalization is simply a special case of the capsule projection onto 1D lines. Only a small negligible computing overhead is incurred to train the network in low-dimensional capsule subspaces or through an alternative hyper-power iteration to estimate the normalization matrix. Experiment results on image datasets show the presented model can greatly improve the performance of the state-of-the-art ResNet backbones by 10− 20% and that of the Densenet by 5− 7% respectively at the same level of computing and memory expenses. The CapProNet establishes the competitive state-of-the-art performance for the family of capsule nets by significantly reducing test errors on the benchmark datasets. 1 Introduction Since the idea of capsule net [15, 9] was proposed, many efforts [8, 17, 14, 1] have been made to seek better capsule architectures as the next generation of deep network structures. Among them are the dynamic routing [15] that can dynamically connect the neurons between two consecutive layers based on their output capsule vectors. While these efforts have greatly revolutionized the idea of building a new generation of deep networks, there are still a large room to improve the state of the art for capsule nets. In this paper, we do not intend to introduce some brand new architectures for capsule nets. Instead, we focus on formalizing the principled idea of using the overall length of a capsule rather than ∗Corresponding author: G.-J. Qi, email: [email protected] and [email protected]. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. a single neuron activation to model the presence of an entity [15, 9]. Unlike the existing idea in literature [15, 9], we formulate this idea by learning a group of capsule subspaces to represent a set of entity classes. Once capsule subspaces are learned, we can obtain set of capsules by performing an orthogonal projection of feature vectors onto these capsule subspaces. Then, one can adopt the principle of separating the presence of an entity and its instantiation parameters into capsule length and orientation, respectively. In particular, we use the lengths of capsules to score the presence of entity classes corresponding to different subspaces, while their orientations are used to instantiate the parameters of entity properties such as poses, scales, deformations and textures. 
In this way, one can use the capsule length to achieve the intra-class invariance in detecting the presence of an entity against appearance variations, as well as model the equivalence of the instantiation parameters of entities by encoding them into capsule orientations [15]. Formally, each capsule subspace is spanned by a basis from the columns of a weight matrix in the neural network. A capsule projection is performed by projecting input feature vectors fed from a backbone network onto the capsule subspace. Specifically, an input feature vector is orthogonally decomposed into the capsule component as the projection onto a capsule subspace and the complement component perpendicular to the subspace. By analyzing the gradient through the capsule projection, one can show that a capsule subspace is iteratively updated along the complement component that contains the novel characteristics of the input feature vector. The training process will continue until all presented feature vectors of an associated class are well contained by the corresponding capsule subspace, or simply the back-propagated error accounting for misclassification caused by capsule lengths vanishes. We call the proposed deep network with the capsule projections the CapProNet for brevity. The CapProNet is friendly to any existing network architecture – it is built upon the embedded features generated by some neural networks and outputs the projected capsule vectors in the subspaces according to different classes. This makes it amenable to be used together with existing network architectures. We will conduct experiments on image datasets to demonstrate the CapProNet can greatly improve the state-of-the-art results by sophisticated networks with only small negligible computing overhead. 1.1 Our Findings Briefly, we summarize our main findings from experiments upfront about the proposed CapProNet. • The proposed CapProNet significantly advances the capsule net performance [15] by reducing its test error from 10.3% and 4.3% on CIFAR10 and SVHN to 3.64% and 1.54% respectively upon the chosen backbones. • The proposed CapProNet can also greatly reduce the error rate of various backbone networks by adding capsule projection layers into these networks. For example, The error rate can be reduced by more than 10− 20% based on Resnet backbone, and by more than 5− 6% based on densenet backbone, with only < 1% and 0.04% computing and memory overhead in training the model compared with the backbones. • The orthogonal projection onto capsule subspaces plays a critical role in delivering competitive performance. On the contrary, simply grouping neurons into capsules could not obviously improve the performance. This shows the capsule projection plays an indispensable role in the CapProNet delivering competitive results. • Our insight into the gradient of capsule projection in Section 2.3 explains the advantage of updating capsule subspaces to continuously contain novel components of training examples until they are correctly classified. We also find that the capsule projection can be viewed as a high-dimensional extension of weight normalization in Section 2.4, where the conventional weight normalization is merely a simple case of the capsule projection onto 1D lines. The source code is available at https://github.com/maple-research-lab. The remainder of this paper is organized as follows. We present the idea of the Capsule Projection Net (CapProNet) in Section 2, and discuss the implementation details in Section 3. 
The review of related work follows in Section 4, and the experiment results are demonstrated in Section 5. Finally, we conclude the paper and discuss the future work in Section 6. 2 The Capsule Projection Nets In this section, we begin by shortly revisiting the idea of conventional neural networks in classification tasks. Then we formally present the orthogonal projection of input feature vectors onto multiple capsule subspaces where capsule lengths are separated from their orientations to score the presence of entities belonging to different classes. Finally, we analyze the gradient of the resultant capsule projection by showing how capsule subspaces are updated iteratively to adopt novel characteristics of input feature vectors through back-propagation. 2.1 Revisit: Conventional Neural Networks Consider a feature vector x ∈ Rd generated by a deep network to represent an input entity. Given its ground truth label y ∈ {1, 2, · · · , L}, the output layer of the deep network aims to learn a group of weight vectors {w1,w2, · · · ,wL} such that wTy x > w T l x, for all, l 6= y. (1) This hard constraint is usually relaxed to a differentiable softmax objective, and the backpropagation algorithm is performed to train {w1,w2, · · · ,wL} and the backbone network generating the input feature vector x. 2.2 Capsule Projection onto Subspaces Unlike simply grouping neurons to form capsules for classification, we propose to learn a group of capsule subspaces {S1,S2, · · · ,SL}, each associated with one of L classes. Suppose we have a feature vector x ∈ Rd generated by a backbone network from an input sample. Then, to learn a proper feature representation, we project x onto these capsule subspaces, yielding L capsules {v1,v2, · · · ,vL} as projections. Then, we will use the lengths of these capsules to score the probability of the input sample belonging to different classes by assigning it to the one according to the longest capsule. Formally, for each capsule subspace Sl of dimension c, we learn a weight matrix Wl ∈ Rd×c the columns of which form the basis of the subspace, i.e., Sl = span(Wl) is spanned by the column vectors. Then the orthogonal projection vl of a vector x onto Sl is found by solving vl = argminv∈span(Wl) ‖x − v‖2. This orthogonal projection problem has the following closedform solution vl = Plx, and Pl = WlW + l where Pl is called projection matrix 2 for capsule subspace Sl, and W+l is the Moore-Penrose pseudoinverse [4]. When the columns of Wl are independent, W+l becomes (W T l Wl) −1WTl . In this case, since we only need the capsule length ‖vl‖2 to predict the class of an entity, we have ‖vl‖2 = √ vTl vl = √ xTPTl Plx = √ xTWlΣlWTl x (2) where Σl = (WTl Wl) −1 can be seen as a normalization matrix applied to the transformed feature vector WTl x as a way to normalize the Wl-transformation based on the capsule projection. As we will discuss in the next subsection, this normalization plays a critical role in updating Wl along the orthogonal direction of the subspace so that novel components pertaining to the properties of input entities can be gradually updated to the subspace. In practice, since c d, the c columns of Wl are usually independent in a high-dimensional d-D space. Otherwise, one can always add a small I to WTl Wl to avoid the numeric singularity when taking the matrix inverse. Later on, we will discuss a fast iterative algorithm to compute the matrix inverse with a hyper-power sequence that can be seamlessly integrated with the back-propagation iterations. 
² A projection matrix $P$ for a subspace $\mathcal{S}$ is a symmetric idempotent matrix (i.e., $P^T = P$ and $P^2 = P$) whose range space is $\mathcal{S}$.

2.3 Insight into Gradients

In this section, we take a look at the gradient used to update $W_l$ in each iteration, which gives us some insight into how the CapProNet learns the capsule subspaces. Suppose we minimize a loss function $\ell$ to train the capsule projection and the network. For simplicity, we only consider a single sample $x$ and its capsule $v_l$. Then, by the chain rule and the differential of the inverse matrix [13], we have the following gradient of $\ell$ w.r.t. $W_l$:

$$\frac{\partial \ell}{\partial W_l} = \frac{\partial \ell}{\partial \|v_l\|_2} \frac{\partial \|v_l\|_2}{\partial W_l} = \frac{\partial \ell}{\partial \|v_l\|_2} \frac{(I - P_l)\, x\, x^T W_l^{+T}}{\|v_l\|_2}, \quad (3)$$

where the operator $(I - P_l)$ can be viewed as the projection onto the orthogonal complement of the capsule subspace spanned by the columns of $W_l$, $W_l^{+T}$ denotes the transpose of $W_l^+$, and the factor $\frac{\partial \ell}{\partial \|v_l\|_2}$ is the back-propagated error accounting for misclassification caused by $\|v_l\|_2$.

Denote by $x^\perp \triangleq (I - P_l) x$ the projection of $x$ onto the orthogonal complement perpendicular to the current capsule subspace $\mathcal{S}_l$. Then the above gradient $\frac{\partial \ell}{\partial W_l}$ only contains columns parallel to $x^\perp$ (up to the coefficients in the vector $x^T W_l^{+T}$). This shows that the basis of the current capsule subspace $\mathcal{S}_l$, i.e., the columns of $W_l$, is updated along this orthogonal component of the input $x$ to the subspace. Since $x^\perp$ can be regarded as the novel component of $x$ not yet contained in the current $\mathcal{S}_l$, the capsule subspaces are updated to absorb the novel component of each input feature vector until all training feature vectors are well contained in these subspaces, or until the back-propagated errors accounting for misclassification caused by $\|v_l\|_2$ vanish.

Figure 1 illustrates an example of a 2-D capsule subspace $\mathcal{S}$ spanned by two basis vectors $w_1$ and $w_2$. An input feature vector $x$ is decomposed into the capsule projection $v$ onto $\mathcal{S}$ and an orthogonal complement $x^\perp$ perpendicular to the subspace. In one training iteration, the two basis vectors $w_1$ and $w_2$ are updated to $w_1'$ and $w_2'$ along the orthogonal direction $x^\perp$, where $x^\perp$ is viewed as containing novel characteristics of an entity not yet contained in $\mathcal{S}$.

2.4 A Perspective of Multi-Dimensional Weight Normalization

As discussed in the last subsection and illustrated in Figure 1, the orthogonal components represent the novel information in the input data, and the orthogonal decomposition thus enables us to update capsule subspaces by incorporating novel characteristics/components more effectively than the classic capsule nets. One can also view the capsule projection as normalizing the column basis of the weight matrix $W_l$ simultaneously in a high-dimensional capsule space. If the capsule dimension $c$ is set to 1, it is not hard to see that Eq. (2) reduces to $\|v_l\| = \frac{|W_l^T x|}{\|W_l\|}$, which is the conventional weight normalization of the vector $W_l \in \mathbb{R}^d$; in other words, weight normalization is a special 1-D case of the capsule projection. As the capsule dimension $c$ grows, $W_l$ can be normalized by replacing $v_l$ with $\Sigma_l^{1/2} W_l^T x$, which leaves $\|v_l\|$ in Eq. (2) unchanged. This enables us to extend conventional weight normalization to high-dimensional capsule subspaces.

3 Implementation Details

We discuss some implementation details in this section, including 1) the computing cost of performing the capsule projection and a fast iterative method using hyper-power sequences without restart, and 2) the objective used to train the capsule projection.
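Before turning to those implementation details, the gradient analysis of Section 2.3 can also be checked numerically. The snippet below is an illustrative sketch only (not part of the paper or its released code): it uses autograd to compute $\partial \|v_l\|_2 / \partial W_l$ and verifies that multiplying this gradient by $P_l$ gives a result that is numerically zero, as implied by Eq. (3), since the gradient's columns lie along $x^\perp$.

```python
import torch

d, c = 64, 4
x = torch.randn(d)
W = torch.randn(d, c, requires_grad=True)

Sigma = torch.linalg.inv(W.T @ W)              # (W^T W)^{-1}
length = torch.sqrt(x @ W @ Sigma @ W.T @ x)   # ||v_l||_2 as in Eq. (2)
length.backward()

P = (W @ Sigma @ W.T).detach()                 # projection onto span(W)
# Columns of d||v||/dW are parallel to x_perp = (I - P) x, so P annihilates them.
print(torch.norm(P @ W.grad) / torch.norm(W.grad))   # ~0 up to float error
```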
3.1 Computing the Normalization Matrix

Computing the normalization matrix $\Sigma_l$ via a matrix inverse becomes more expensive as the dimension $c$ increases, although once the model is trained, the matrix is fixed and only needs to be computed once for inference. Fortunately, the dimension $c$ of a capsule subspace is usually much smaller than the feature dimension $d$, which is typically in the hundreds or even thousands; for example, $c$ is no larger than 8 in our experiments. Thus, taking a matrix inverse to compute these normalization matrices incurs only a negligible computing overhead compared with training the many other layers of a deep network.

Alternatively, one can take advantage of an iterative algorithm to compute the normalization matrix. We consider the following hyper-power sequence

$$\Sigma_l \leftarrow 2\Sigma_l - \Sigma_l W_l^T W_l \Sigma_l,$$

which has been proven to converge to $(W_l^T W_l)^{-1}$ from a proper initial point [2, 3]. In the stochastic gradient method, only a small change is made to $W_l$ in each training iteration, so it is often sufficient to use this recursion to make a one-step update of the normalization matrix from the last iteration. The normalization matrix $\Sigma_l$ can be initialized to $(W_l^T W_l)^{-1}$ at the very first iteration to give an ideal start. This can further save computing cost in training the network.

In experiments, the capsule projection incurred only a very small computing overhead. For example, training ResNet110 on CIFAR10/100 cost about 0.16 seconds per iteration on a batch of 128 images. In comparison, training the CapProNet with a ResNet110 backbone in an end-to-end fashion cost only an additional < 0.001 seconds per iteration, i.e., less than 1% computing overhead for the CapProNet compared with its backbone. For inference, we did not find any noticeable computing overhead for the CapProNet compared with its backbone network.

3.2 Training Capsule Projections

Given a group of capsule vectors $\{v_1, v_2, \cdots, v_L\}$ corresponding to a feature vector $x$ and its ground truth label $y$, we train the model by requiring $\|v_y\|_2 > \|v_l\|_2$ for all $l \neq y$. In other words, we require $\|v_y\|_2$ to be larger than the lengths of all the other capsules. As a consequence, we can minimize the following negative log-softmax function

$$\ell(x, y) = -\log \frac{\exp(\|v_y\|_2)}{\sum_{l=1}^{L} \exp(\|v_l\|_2)}$$

to train the capsule subspaces and the network generating $x$ through back-propagation in an end-to-end fashion. Once the model is trained, we classify a test sample into the class with the longest capsule.

4 Related Work

The presented CapProNets are inspired by the CapsuleNets, adopting the idea of using a capsule vector rather than a single neural activation output to predict the presence of an entity and its properties [15, 9]. In particular, the overall length of a capsule vector represents the existence of the entity, and its direction instantiates the properties of the entity. We formalize this idea in this paper by explicitly learning a group of capsule subspaces and projecting embedded features onto these subspaces. The advantage of these capsule subspaces is that their directions can represent characteristics of an entity, which contain much richer information, such as positions, orientations, scales and textures, than a single activation output. By performing an orthogonal projection of an input feature vector onto a capsule subspace, one can find the best direction revealing these properties.
Otherwise, the entity is considered absent, as the projection vanishes when the input feature vector is nearly perpendicular to the capsule subspace.

5 Experiments

We conduct experiments on benchmark datasets to evaluate the proposed CapProNet in comparison with other deep network models.

5.1 Datasets

We use the CIFAR, SVHN and ImageNet datasets in our experiments to evaluate performance.

CIFAR The CIFAR dataset contains 50,000 and 10,000 images of 32 × 32 pixels for the training and test sets, respectively. Standard data augmentation with horizontal flipping and shifting is adopted. The images are labeled with 10 and 100 categories, giving the CIFAR10 and CIFAR100 datasets. A separate validation set of 5,000 images is split from the training set to choose the model hyperparameters, and the final test errors are reported with the chosen hyperparameters by training the model on all 50,000 training images.

SVHN The Street View House Numbers (SVHN) dataset has 73,257 and 26,032 images of colored digits in the training and test sets, with an additional 531,131 training images available. Following the widely used evaluation protocol in the literature [5, 11, 12, 16], all the training examples are used without data augmentation, while a separate validation set of 6,000 images is split from the training set. The model with the smallest validation error is selected and its error rate is reported.

ImageNet The ImageNet dataset consists of 1.2 million training and 50k validation images. We apply mean image subtraction as the only pre-processing step and use random cropping, scaling and horizontal flipping for data augmentation [6]. The final resolution of both the training and validation sets is 224 × 224, and 20k images are chosen randomly from the training set for tuning hyperparameters.

5.2 Backbone Networks

We test various networks, such as ResNet [6], ResNet (pre-activation) [7], WideResNet [18] and DenseNet [10], as backbones in our experiments. The last output layer of a backbone network is replaced by the capsule projection, where the feature vector from the second-to-last layer of the backbone is projected onto multiple capsule subspaces. The CapProNet is trained from scratch in an end-to-end fashion on the given training set. For the sake of fair comparison, the strategies used to train the respective backbones [6, 7, 18], such as the learning rate schedule, parameter initialization, and the stochastic optimization solver, are adopted to train the CapProNet. We denote the CapProNet with a backbone X by CapProNet+X below.

5.3 Results

We perform experiments with various networks as backbones for comparison with the proposed CapProNet. In particular, we consider three variants of ResNets: the classic ResNet reported in [11] with 110 layers, the ResNet with pre-activation [7] with 164 layers, and WideResNets [18] in two configurations with 16 and 28 layers, as well as DenseNet-BC [10] with 100 layers. Compared with ResNet and ResNet with pre-activation, WideResNet has fewer but wider layers and reaches smaller error rates, as shown in Table 1. We test CapProNet+X with these different backbone networks to evaluate whether it can consistently improve these state-of-the-art backbones. It is clear from Table 1 that CapProNet+X outperforms the corresponding backbone networks by a remarkable margin. For example, CapProNet+ResNet reduces the error rate by 19%, 17.5% and 10% on CIFAR10, CIFAR100 and SVHN, while CapProNet+DenseNet reduces the error rate by 5.8%, 4.8% and 6.8%, respectively.
Finally, we note that the CapProNet significantly advances the capsule net performance [15] by reducing its test error from 10.3% and 4.3% on CIFAR10 and SVHN to 3.64% and 1.54%, respectively, based on the chosen backbones.

We also evaluate the CapProNet with ResNet50 and ResNet101 backbones and report single-crop Top-1/Top-5 results on the ImageNet validation set. To ensure a fair comparison, we retrain the backbone networks based on the official ResNet model³, where both the original ResNet [6] and the CapProNet are trained with the same training strategies on four GPUs. The results are reported in Table 2, where CapProNet+X successfully outperforms the original backbones on both Top-1 and Top-5 error rates. It is worth noting that these gains are obtained merely by replacing the last layer of the backbones with the capsule projection layer. We believe the error rate can be further reduced by also replacing intermediate convolutional layers with capsule projections, and we leave this to future research.

³ https://github.com/tensorflow/models/tree/master/official/resnet

We also note that CapProNet+X consistently outperforms its backbone counterparts with varying dimensions $c$ of the capsule subspaces. In particular, with the WideResNet backbones, the error rates are in most cases reduced as the capsule dimension $c$ increases on all datasets, with the smallest error rates often occurring at $c = 8$. In contrast, while CapProNet+X still clearly outperforms both the ResNet and ResNet (pre-activation) backbones, its error rates stay at roughly the same level across $c$. This is probably because both ResNet backbones feed feature vectors of a much smaller dimension ($d = 64$) into the capsule projection than the WideResNet backbones, where $d = 128$ and $d = 160$ with 16 and 28 layers, respectively. This suggests that a larger input dimension makes it possible to use capsule subspaces of higher dimensions to encode patterns of variation along more directions in a higher-dimensional input feature space.

To further assess the effect of the capsule projection, we compare with a method that simply groups the output neurons into capsules without performing an orthogonal projection onto capsule subspaces. We still use the lengths of the resulting "capsules" of grouped neurons to classify input images, and the model is trained in an end-to-end fashion accordingly. Unfortunately, this approach, denoted GroupNeuron+ResNet in Table 3, does not show a significant improvement over the backbone network. For example, the smallest error rate reached by GroupNeuron+ResNet is 6.26 at $c = 2$, a small improvement over the error rate of 6.41 reached by ResNet110. This demonstrates that the capsule projection makes an indispensable contribution to improving model performance.

When training on CIFAR10/100 and SVHN, one iteration typically costs about 0.16 seconds for ResNet110, with less than an additional 0.01 seconds to train the corresponding CapProNet, i.e., less than 1% computing overhead. The memory overhead for the model parameters is even smaller; for example, CapProNet+ResNet only has an additional 640-6,400 parameters at $c = 2$, compared with the 1.7M parameters of the backbone ResNet. We do not notice any large computing or memory overhead with ResNet (pre-activation) or WideResNet either. This shows an advantage of CapProNet+X: its error rate reduction is not achieved by consuming significantly more computing or memory resources.
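Most of this small training overhead comes from maintaining the normalization matrices of Section 3.1. The following is a rough sketch of that one-step hyper-power update (illustrative only; the function names are ours and the released implementation may differ), which avoids recomputing a full matrix inverse at every iteration.

```python
import torch

def init_sigma(W):
    """Exact inverse (W^T W)^{-1}, computed once at the very first iteration."""
    return torch.linalg.inv(W.T @ W)

def hyperpower_step(Sigma, W):
    """One hyper-power refinement: Sigma <- 2*Sigma - Sigma (W^T W) Sigma."""
    WtW = W.T @ W
    return 2 * Sigma - Sigma @ WtW @ Sigma

# After each small SGD update to W, a single step keeps Sigma close to (W^T W)^{-1}.
W = torch.randn(64, 4)
Sigma = init_sigma(W)
W = W + 1e-3 * torch.randn_like(W)                     # stand-in for one SGD update
Sigma = hyperpower_step(Sigma, W)
print(torch.norm(Sigma @ (W.T @ W) - torch.eye(4)))    # small residual
```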
5.4 Visualization of Projections onto Capsule Subspaces

Table 3: Comparison between GroupNeuron and CapProNet with the ResNet110 backbone on the CIFAR10 dataset. The best results are highlighted in bold for c = 2, 4, 8 capsules. It shows the need for capsule projection to obtain better results.

c    GroupNeuron    CapProNet
2    6.26           5.24
4    6.29           5.27
8    6.42           5.19

To give an intuitive insight into the learned capsule subspaces, we plot the projections of input feature vectors onto the capsule subspaces. Instead of directly using $P_l x$ to project feature vectors onto capsule subspaces in the original input space $\mathbb{R}^d$, we use $(W_l^T W_l)^{-\frac{1}{2}} W_l^T x$ to project an input feature vector $x$ onto $\mathbb{R}^c$, since this projection preserves the capsule length $\|v_l\|_2$ defined in Eq. (2). Figure 2 illustrates the 2-D capsule subspaces learned on CIFAR10 with $c = 2$ and $d = 64$ in CapProNet+ResNet110, where each subspace corresponds to one of the ten classes. Red points represent the capsules projected from input samples of the class corresponding to the subspace, while green points correspond to samples of one of the other classes. The figure shows that red capsules have larger lengths than green ones, which suggests that the capsule length is a valid metric for classifying samples into their corresponding classes. Meanwhile, the orientation of a capsule reflects various instantiations of a sample in these subspaces. These figures visualize the separation of the lengths of capsules from their orientations in classification tasks.

6 Conclusions and Future Work

In this paper, we present a novel capsule projection network that learns a group of capsule subspaces for different classes. Specifically, the parameters of an orthogonal projection are learned for each class, and the lengths of the projected capsules are used to predict the entity class for a given input feature vector. Training continues until the capsule subspaces contain the input feature vectors of their corresponding classes or the back-propagated error vanishes. Experimental results on real image datasets show that the proposed CapProNet+X greatly improves the performance of backbone networks without incurring large computing and memory overheads. While we only test the capsule projection as the output layer in this paper, we will attempt to insert it into intermediate layers of backbone networks as well, and hope this could give rise to a new generation of capsule networks with more discriminative architectures in the future.

Acknowledgements

L. Zhang and M. Edraki made equal contributions to implementing the idea: L. Zhang conducted experiments on the CIFAR10 and SVHN datasets and visualized the projections in capsule subspaces on CIFAR10. M. Edraki performed the experiments on CIFAR100. G.-J. Qi initialized and formulated the idea, and prepared the paper.
1. What is the main contribution of the paper, and how does it relate to the original capsule network?
2. What are the strengths and weaknesses of the proposed method, particularly in its ability to enhance accuracy and its relation to the core thoughts of the capsule network?
3. How does the author handle inverse matrix gradient propagation, and is it efficient enough?
4. What are some limitations in the comparisons made in the paper, and how could they be improved?
5. Are there any areas where the presentation of the paper could be improved, such as the arrangement of tables or the clarity of certain discussions?
Review
This paper adopts the capsule vector idea from the capsule network and proposes dividing the feature space into multiple orthogonal subspaces, one for each target category. Given a feature vector, the model first projects it onto the per-class subspaces and then uses the ℓ2 norm of each projection to compute softmax probabilities. Experiments show a promising accuracy improvement. The idea is valuable, but it may have discarded some core ideas of the original capsule network.

Pros:

1. It is a novel idea to project the feature vector onto orthogonal subspaces. It is motivated by the vector representation and length-to-probability ideas from the capsule network, and the authors take one more step to uncover new things. This is an interesting idea with good mathematical intuition. The proposed orthogonal subspace projection provides a novel and effective way to formalize the principled idea of using the overall length of a capsule.

2. On several image classification benchmarks, the CapProNet shows improvement (10-20% reduced error rate) over state-of-the-art ResNets by incorporating the CapProNet as the last output layer. Additionally, the ResNet with CapProNet incurs only <1% computing and 0.04% memory overhead during model training compared with the original ResNet. The CapProNet is highly extensible to many existing networks. It might become a useful method that can be used in a more general setup.

3. The way the authors handle the gradient propagation through the matrix inverse is interesting.

4. The presentation of the paper is clear; e.g., the presentation of the visualization of projections onto capsule subspaces in Section 5.2 is good.

Cons:

1. It may be arguable whether the models presented in this paper should be called capsule networks: only the neuron-grouping idea is inherited from the capsule network paper, and other valuable core ideas are discarded. For example, the capsule network introduces dynamic routing, which grabs confident activations through coincidence filtering, and its different levels of capsules can learn a part-whole hierarchy. In this paper, however, the second-to-last layer is a single feature vector, which is bound to diverge from some of those core ideas because it is likely that we cannot find patterns that agree during voting.

2. While the way the authors handle the gradient propagation through the matrix inverse is interesting and does not harm the gradient back-propagated toward lower layers, I wonder whether it is efficient enough to perform the m*n routing-by-agreement scheme proposed by the original capsule paper.

3. The comparison in Table 1 doesn't include the latest state-of-the-art models on these image classification benchmarks, i.e., DenseNet (Huang et al., 2017), which achieves better results than CapProNet/ResNet on CIFAR-10/100. I think it would be more convincing to perform CapProNet experiments based on DenseNet or other latest state-of-the-art models. The comparison in Table 2 is not detailed enough. There should be more description in this part, since CapProNet is very similar to "GroupNeuron" in surface form. A more detailed and analytical discussion comparing CapProNet and "GroupNeuron" would be helpful.

4. Still, the paper could be better written, e.g., by improving the positions/arrangement of tables and fixing existing typos.